Re: [openstack-dev] [FaaS] Introduce a FaaS project

2017-05-15 Thread Li Ma
That's interesting. Serverless is a general computing engine that can
bring many possibilities for making use of the resources managed by
OpenStack. I'd like to see a purely OpenStack-powered solution there.

Have you submitted a proposal to create this project under the
OpenStack umbrella?

On Mon, May 15, 2017 at 9:36 AM, Lingxian Kong <anlin.k...@gmail.com> wrote:
> Yes, I am reinventing the wheel :-)
>
> I am not sending this email to say that the Qinling[1] project is a
> better function-as-a-service option than the others. I just want to
> provide another possibility for developers/operators already in the
> OpenStack world, and to try my luck at finding people who share an
> interest in the serverless area and will cooperate to make it more and
> more mature if possible, because I see serverless becoming more and
> more popular in today's agile IT world, but I don't see a good
> candidate in the OpenStack ecosystem.
>
> I remember asking whether we have a FaaS project available in
> OpenStack, and what I got were suggestions like Picasso[2],
> OpenWhisk[3], etc., but IMHO none of them can be well integrated with
> the OpenStack ecosystem. I don't mean they are not good; on the
> contrary, they are good, especially OpenWhisk, which is already
> deployed and available in IBM Bluemix production. Picasso is only a
> very thin proxy layer over IronFunctions, an open source project from
> the Iron.io company, which also has a commercial FaaS product.
>
> However, there are several reasons that made me create a new project:
>
> - Maybe not many OpenStack operators/developers want to touch a project
>   written in a programming language other than Python (and maybe Go? I'm
>   not sure about the result of the TC resolution). The deployment,
>   dependency management, and code maintenance would bring much more
>   overhead.
>
> - I'd like to see a project that uses similar components/infrastructure
>   to most of the other OpenStack projects, e.g. keystone authentication,
>   a message queue (in order to receive notifications from Panko and then
>   trigger functions), a database, the oslo libraries, swift (for code
>   package storage), etc. Of course, I could directly contribute to and
>   modify an existing project (e.g. Picasso) to satisfy these conditions,
>   but I am afraid the time and effort that would take is exactly the
>   same as creating a new one.
>
> - I'd like to see a project with no vendor/platform lock-in. Most FaaS
>   projects are based on one specific container orchestration platform or
>   want to promote usage of their own commercial product. For me, it's
>   always a good thing to have more technical options when evaluating a
>   new service.
>
> The Qinling project is still at a very, very early stage. I created it
> one month ago and work on it only in my spare time. But it works; you
> can see a basic usage introduction in README.rst and give it a try. A
> lot of things are still missing: CLI, unit tests, a devstack plugin,
> UI, etc.
>
> Of course, you can ignore me (I still appreciate you reading this far)
> if you think it's really unnecessary and stupid to create such a
> project in OpenStack, or you can join me to discuss what we can do to
> improve it gradually and provide a better option for real
> function-as-a-service to people in the OpenStack world.
>
> [1]: https://github.com/LingxianKong/qinling
> [2]: https://github.com/openstack/picasso
> [3]: https://github.com/openwhisk/openwhisk
>
> Cheers,
> Lingxian Kong (Larry)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr][Dragonflow] - PTL Non-Candidacy

2016-09-12 Thread Li Ma
It was a great pleasure to work with you, Gal. You did an awesome job
on Dragonflow.

Thanks for all your help and guidance in this project. Hopefully we
can get together some time or cooperate on other projects in the
future.

On Mon, Sep 12, 2016 at 4:31 PM, Gal Sagie <gal.sa...@gmail.com> wrote:
> Hello all,
>
> I would like to announce that I will not be running for PTL of either
> Kuryr or Dragonflow.
>
> I believe that both projects show great value and progress for the
> time they have existed, mostly thanks to the great communities
> actively working on both of them.
>
> I also strongly believe that the PTL position is one that should
> rotate, given there is a good candidate, and I believe there are some
> great candidates for both projects.
>
> I will of course still stay involved in both projects and am excited
> to see what the next release is going to bring.
>
> Thanks to everyone who closely helped and contributed, and let's keep
> making OpenStack great together.
>
> Gal.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re: [neutron] dragonflow deployment incorrectness

2016-08-02 Thread Li Ma
DATA[],resubmit(,72)
>  cookie=0x0, duration=1710.876s, table=64, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1a
> actions=load:0x2->OXM_OF_METADATA[],resubmit(,72)
>  cookie=0x0, duration=1710.876s, table=64, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1c
> actions=load:0x3->OXM_OF_METADATA[],resubmit(,72)
>  cookie=0x0, duration=1710.869s, table=64, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1b
> actions=load:0x2->OXM_OF_METADATA[],resubmit(,72)
>  cookie=0x0, duration=1710.853s, table=66, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100 actions=output:1
>  cookie=0x0, duration=1710.883s, table=72, n_packets=0, n_bytes=0,
> idle_age=1710, priority=1 actions=resubmit(,78)
>  cookie=0x0, duration=1710.853s, table=77, n_packets=0, n_bytes=0,
> idle_age=1710, priority=1 actions=drop
>  cookie=0x0, duration=1710.883s, table=78, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x17 actions=output:19
>  cookie=0x0, duration=1710.877s, table=78, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x15 actions=output:13
>  cookie=0x0, duration=1710.876s, table=78, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1a actions=output:15
>  cookie=0x0, duration=1710.876s, table=78, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1c actions=output:20
>  cookie=0x0, duration=1710.869s, table=78, n_packets=0, n_bytes=0,
> idle_age=1710, priority=100,reg7=0x1b actions=output:18
>
> Could anyone look into this problem for me?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [dragonflow] The failure of fullstack tests

2016-07-24 Thread Li Ma
Currently, we are suffering from fullstack test failures. At first
glance, I noticed an infinite wait at the line 'self.policy.wait(30)'
until an eventlet exception occurred in
fullstack.test_apps.TestArpResponder.

There were no logs (df-controller or q-svc) generated for that test
case, which means the test didn't even start.

However, I couldn't reproduce it in my local environment.

Then, I submitted a patch to skip the TestArpResponder test case to
see what would happen. In the end, the next test case, TestDHCPApp,
failed at the same line [1].

I guess it has something to do with the policy object in [2], but
currently I have no idea how to debug it in the CI, while it works in
my local devstack.

[1] 
http://logs.openstack.org/64/343464/9/check/gate-dragonflow-dsvm-fullstack-nv/1c333c6/testr_results.html.gz

[2] 
https://github.com/openstack/dragonflow/blob/master/dragonflow/tests/common/app_testing_objects.py

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] NetlinkInterfaceSensor timeout causes failure of networking

2016-06-22 Thread Li Ma
Hi Midoers,

I've been running a small production system for a long time. It
deploys the v2014.11 release, because the OpenStack cluster is
Icehouse and the end user doesn't want to upgrade it.

Currently, this exception occurs periodically and causes network
connection failures for virtual machines:

WARN  [interface-scanner] NetlinkInterfaceSensor -  Timeout exception
thrown with value: 3000 ms

This exception occurs in almost all midolman processes. If we restart
midolman, the problem disappears for a while but re-occurs within
several minutes. The warning appears at random.

Could you provide some hints on the logic behind it and how to get rid
of it? My guess is that it is not a bug, but rather something that
needs configuration tuning. When I went through the code, I couldn't
find the problem.

Thanks a lot,
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-bagpipe] how to deploy devstack with bgpvpn and bagpipe driver

2016-06-15 Thread Li Ma
Hi all,

I would like to evaluate the bgpvpn and bagpipe drivers on my local
machine, but I cannot find any useful deployment guide for them.

I found a GitHub repository for this, but it seems out of date (last
updated 3 months ago). I'd appreciate it if someone could provide a
working devstack configuration for multiple nodes.

Best regards,
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [dragonflow] question about the schedule during design summit

2016-03-27 Thread Li Ma
Hi all,

I will have a tight schedule during the design summit. I'd like to
know about the arrangements for April 28-29, so I can try to
re-schedule to fit them if needed.

Thanks,
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [dragonflow] db consistency progress

2016-02-26 Thread Li Ma
On Wednesday, February 24, 2016, Gal Sagie <gal.sa...@gmail.com> wrote:

> Hi Li Ma,
>
> Great job on this!
>
> I think Rally is the correct way to test something like that. What
> you could use to check the end-to-end solution is to restart the DF
> DB server and look for timeouts during API configuration.
>
> At this point in time, I think this is really advanced testing and I
> wouldn't bother too much right now; we need to make everything else
> more stable and then get back to this.
>
>
OK, sure.
-- 

Best regards,
Li Ma (Nick)

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dragonflow] Part of the OpenStack Big-tent

2016-02-24 Thread Li Ma
Congrats ;-)

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [dragonflow] db consistency progress

2016-02-24 Thread Li Ma
Hi all, today I submitted the working patch for review:

https://review.openstack.org/#/c/282290/

I tested it by invoking many concurrent API calls that update a given
Neutron object, against both standalone and clustered MySQL. The
result is that none of the reported inconsistency bugs were triggered.

At the current stage, I'd like to discuss testing. I think it needs to
be tested at the gate by fullstack or Rally. However, due to the
uncertainty of the problem, the bug is not always triggered, even
without this patch. The problem here is how to make sure the test
cases are reliable.
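
For reference, below is a minimal sketch of the kind of concurrency
test I ran locally. It assumes a devstack environment and
python-neutronclient; the credentials and network UUID are
placeholders, and this is not the proposed gate test:

    import concurrent.futures

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    NET_ID = 'REPLACE-WITH-AN-EXISTING-NETWORK-UUID'

    def rename(i):
        # Concurrent updates of the same object must not leave the
        # Dragonflow DB and the Neutron DB inconsistent.
        return neutron.update_network(
            NET_ID, {'network': {'name': 'net-%d' % i}})

    # Fire many updates at once and let the patch's locking sort out
    # the ordering.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        list(pool.map(rename, range(100)))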

Any thoughts? Thanks a lot.

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Dragonflow] Atomic update doesn't work with etcd-driver

2015-12-28 Thread Li Ma
Hi Gal, you reverted this patch [1] due to the broken pipeline. Could
you provide some debug information or detailed description? When I run
my devstack, I cannot reproduce the sync problem.

[1] 
https://github.com/openstack/dragonflow/commit/f83dd5795d54e1a70b8bdec1e6dd9f7815eb6546

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Dragonflow] Support configuration of DB clusters

2015-12-27 Thread Li Ma
My intention is to pass a db-host-list (maybe defined in the conf
file) to the db backend drivers. I see that '**args' is available [1],
but it seems not to work due to [2].

I suggest a simpler method to allow user-defined configuration: remove
the db_ip and db_port parameters and directly pass the cfg.CONF object
to the db backend driver.

In db_api.py, it should be:

    def initialize(self, config):
        self.config = config

In api_nb.py, it should be:

    def initialize(self):
        self.driver.initialize(cfg.CONF)  # cfg.CONF comes from oslo_config

As a result, db backend developers can choose which parameters to use.

[1] 
https://github.com/openstack/dragonflow/blob/master/dragonflow/db/db_api.py#L21
[2] 
https://github.com/openstack/dragonflow/blob/master/dragonflow/db/api_nb.py#L74-L75
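
To illustrate, here is a minimal sketch (hypothetical option name and
driver class, not actual Dragonflow code) of how a backend driver
could consume a user-defined host list once cfg.CONF is handed over:

    from oslo_config import cfg

    df_opts = [
        cfg.ListOpt('remote_db_hosts',
                    default=['127.0.0.1:4001'],
                    help='List of db server host:port endpoints'),
    ]

    class ClusteredDbDriver(object):
        def initialize(self, config):
            # Register and read the option from the conf object that
            # api_nb.py passed in.
            config.register_opts(df_opts, group='df')
            self.hosts = [h.rsplit(':', 1)
                          for h in config.df.remote_db_hosts]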

On Mon, Dec 28, 2015 at 9:12 AM, shihanzhang <ayshihanzh...@126.com> wrote:
>
> good suggestion!
>
>
> At 2015-12-25 19:07:10, "Li Ma" <skywalker.n...@gmail.com> wrote:
>>Hi all, currently we only support db_ip and db_port in the
>>configuration file. Some DB SDKs support clustering, like ZooKeeper's:
>>you can specify a list of nodes when the client application starts to
>>connect to the servers.
>>
>>I'd like to implement this feature by specifying an ['ip1:port',
>>'ip2:port', 'ip3:port'] list in the configuration file. If only one
>>server exists, just set it to ['ip1:port'].
>>
>>Any suggestions?
>>
>>--
>>
>>Li Ma (Nick)
>>Email: skywalker.n...@gmail.com
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [Dragonflow] Support configuration of DB clusters

2015-12-25 Thread Li Ma
Hi all, currently we only support db_ip and db_port in the
configuration file. Some DB SDKs support clustering, like ZooKeeper's:
you can specify a list of nodes when the client application starts to
connect to the servers.

I'd like to implement this feature by specifying an ['ip1:port',
'ip2:port', 'ip3:port'] list in the configuration file. If only one
server exists, just set it to ['ip1:port'].

Any suggestions?

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MidoNet] Fwd: [MidoNet-Dev] Sync virtual topology data from neutron DB to Zookeeper?

2015-12-16 Thread Li Ma
It sounds great that you guys have completely moved to OpenStack. Sorry
for the late reply; I've been on a business trip for more than half a
month, which has been really exhausting.

OK, I see that you have a plan. I do need such a tool in the near
future; it's really important for operations.
Thanks, Ryu.

On Tue, Dec 15, 2015 at 12:03 AM, Ryu Ishimoto <r...@midokura.com> wrote:
> Hi Nick,
>
> We have already designed the data sync feature[1], but this
> development was suspended temporarily in favor of completing the v5.0
> development of MidoNet.
>
> We will be resuming development work on this project soon (with high 
> priority).
>
> It sounds to me like you need a completed, mature tool immediately to
> achieve what you want, which we cannot provide right now.  There is a
> networking-midonet meeting tomorrow on IRC at 07:00UTC [2] if you want
> to discuss this further.  We could try to brainstorm possible
> solutions together.
>
> Ryu
>
> [1] 
> https://github.com/openstack/networking-midonet/blob/master/specs/kilo/data_sync.rst
> [2] http://eavesdrop.openstack.org/#Networking_Midonet_meeting
>
> On Mon, Dec 14, 2015 at 11:08 PM, Galo Navarro <g...@midokura.com> wrote:
>> Hi Li,
>>
>> Sorry for the late reply. Unrelated point: please note that we've
>> moved the mailing lists to Openstack infra
>> (openstack-dev@lists.openstack.org  - I'm ccing the list here).
>>
>> At the moment we don't support syncing the full Neutron DB. There
>> has been work done that would allow this use case, but it's still
>> not complete or released.
>>
>> @Ryu may be able to provide recommendations to do this following a
>> manual process.
>>
>> Cheers,
>> g
>>
>>
>>
>> On 4 December 2015 at 09:27, Li Ma <skywalker.n...@gmail.com> wrote:
>>> Hi midoers,
>>>
>>> I have an OpenStack cloud with neutron ML2+OVS. I'd like to switch
>>> from OVS to MidoNet in that cloud.
>>>
>>> Actually the neutron DB stores all the existing virtual topology. I
>>> wonder if there's some guides or ops tools for MidoNet to sync data
>>> from the neutron DB to Zookeeper.
>>>
>>> Thanks a lot,
>>> --
>>>
>>> Li Ma (Nick)
>>> Email: skywalker.n...@gmail.com
>>> ___
>>> MidoNet mailing list
>>> mido...@lists.midonet.org
>>> http://lists.midonet.org/listinfo/midonet
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] ping cannot work from VM to external gateway IP.

2015-12-16 Thread Li Ma
Hi Midoers,

I have a platform running MidoNet 2015 (I think it is the last release
before the switch to 5.0). I cannot ping from a VM to the external
gateway IP (which is set up on the physical router side).

VM inter-connectivity is OK.

When I tcpdump packets on the physical interface located on the
gateway node, I just capture lots of ARP requests to the external
gateway IP.

I'm not sure how the MidoNet gateway manages ARP. Will ARP entries be
cached on the gateway host?

Can I specify a static ARP record with the 'ip' command on the gateway
node to solve it quickly (if not gracefully)?
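
(For reference, the generic iproute2 form I have in mind is 'ip neigh
replace <gateway-ip> lladdr <router-mac> dev <nic> nud permanent',
assuming <nic> is the interface bound to the provider router port.)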

(Currently I'm in the business trip that cannot touch the environment.
So, I'd like to get some ideas first and then I can tell my partners
to work on it.)

Thanks a lot,

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] ping cannot work from VM to external gateway IP.

2015-12-16 Thread Li Ma
Updated:

Lots of ARP requests from the external physical router to the VM are
captured on the physical NIC bound to the provider router port.

It seems that the external physical router doesn't get answers to
these ARP requests.

On Wed, Dec 16, 2015 at 8:08 PM, Li Ma <skywalker.n...@gmail.com> wrote:
> Hi Midoers,
>
> I have a platform running MidoNet 2015 (I think it is the last
> release before the switch to 5.0). I cannot ping from a VM to the
> external gateway IP (which is set up on the physical router side).
>
> VM inter-connectivity is OK.
>
> When I tcpdump packets on the physical interface located on the
> gateway node, I just capture lots of ARP requests to the external
> gateway IP.
>
> I'm not sure how the MidoNet gateway manages ARP. Will ARP entries be
> cached on the gateway host?
>
> Can I specify a static ARP record with the 'ip' command on the
> gateway node to solve it quickly (if not gracefully)?
>
> (Currently I'm in the business trip that cannot touch the environment.
> So, I'd like to get some ideas first and then I can tell my partners
> to work on it.)
>
> Thanks a lot,
>
> --
>
> Li Ma (Nick)
> Email: skywalker.n...@gmail.com



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] how to configure static route for uplink

2015-12-16 Thread Li Ma
Hi, I'm following [1] to configure a static route for the uplink. I'm
not sure whether I'm getting it right.

(1) What is the floating-IP subnet (gateway?) configuration in the guide?
(2) What is the eth0 configuration in the guide?
(3) Do I need to configure the uplink physical router, i.e. a back
route to eth0?
(4) What if I just bind router0:port0 to the physical NIC eth0 and
don't do the uplink bridge and veth pair setup? Can it work?

[1] 
https://docs.midonet.org/docs/v2015.06/en/operations-guide/content/static_setup.html

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][taas] proposal: dedicated tunnel for carrying mirrored traffic

2015-11-18 Thread Li Ma
I suggest you file an RFE request for it [1]. We can then discuss it
and track the progress in Launchpad.

By the way, I'm very interested in it. I discussed a related problem
with Huawei Neutron engineers about abstracting the VTEP as a Neutron
port. That would allow managing VTEPs and could leverage flexibility
in many aspects, as more and more Neutron features need VTEP
management, just like your proposal.
[1] http://docs.openstack.org/developer/neutron/policies/blueprints.html

On Thu, Nov 19, 2015 at 11:36 AM, Soichi Shigeta
<shigeta.soi...@jp.fujitsu.com> wrote:
>
>  Hi,
>
> As we decided in the last weekly meeting,
>   I'd like to use this mailing list to discuss
>   a proposal about creating a dedicated tunnel
>   for carrying mirrored traffic between hosts.
>
>   link:
> https://wiki.openstack.org/w/images/7/78/TrafficIsolation_20151116-01.pdf
>
>   Best Regards,
>   Soichi Shigeta
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron debugging tool

2015-09-20 Thread Li Ma
AFAIK, there is a project available on GitHub that does the same thing:
https://github.com/yeasy/easyOVS

I used it before.

On Mon, Sep 21, 2015 at 12:17 AM, Nodir Kodirov <nodir.qodi...@gmail.com> wrote:
> Hello,
>
> I am planning to develop a tool for network debugging. Initially, it
> will handle the DVR case, but it can also be extended to others. Based
> on my OpenStack deployment/operations experience, I am planning to
> handle common pitfalls/misconfigurations, such as:
> 1) check external gateway validity
> 2) check if appropriate qrouter/qdhcp/fip namespaces are created in
> compute/network hosts
> 3) execute probing commands inside namespaces, to verify reachability
> 4) etc.
>
> I came across neutron-debug [1], which mostly focuses on namespace
> debugging. Its coverage is limited to OpenStack itself, while I am
> planning to cover compute/network nodes as well. In my experience, I
> had to ssh to the host(s) to accurately diagnose failures (e.g., cases
> 1 and 2 above). The tool I am considering will handle these, given the
> host credentials.
>
> I'd like to get the community's feedback on the utility of such a
> debugging tool. Do people use neutron-debug in their OpenStack
> environments? Does the tool I am planning to develop, with complete
> diagnosis coverage, sound useful? Is anyone interested in joining the
> development? All feedback is welcome.
>
> Thanks,
>
> - Nodir
>
> [1] 
> http://docs.openstack.org/cli-reference/content/neutron-debug_commands.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [oslo.privsep] Any progress on privsep?

2015-09-18 Thread Li Ma
Thanks for your reply, Gus. That's awesome. I'd like to have a look at
it, or test it if possible.

Is any source code available upstream?

On Fri, Sep 18, 2015 at 12:40 PM, Angus Lees <g...@inodes.org> wrote:
> On Fri, 18 Sep 2015 at 14:13 Li Ma <skywalker.n...@gmail.com> wrote:
>>
>> Hi stackers,
>>
>> Currently we are discussing the possibility of using a pure Python
>> library to configure networking in Neutron [1]. We have found that it
>> is impossible to do without privsep, because we run external commands
>> that cannot be replaced by Python calls via rootwrap.
>>
>> Privsep was merged in the Liberty cycle. I just wonder how it is
>> going.
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1492714
>
>
> Thanks for your interest :)  This entire cycle has been spent on the spec.
> It looks like it might be approved very soon (got the first +2 overnight),
> which will then unblock a string of "create new oslo project" changes.
>
> During the spec discussion, the API was changed (for the better). Now
> that the discussion has settled down, I'm getting to work rewriting it
> following the new API. It took me about 2 weeks to write it the first
> time around (almost all on the testing framework), so I'd expect
> something of similar magnitude this time.
>
> I don't make predictions about timelines that rely on the OpenStack review
> process, but if you forced me I'd _guess_ it will be ready for projects to
> try out early in M.
>
>  - Gus
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [dragonflow] Low OVS version for Ubuntu

2015-09-17 Thread Li Ma
Hi all,

I tried to run devstack to deploy Dragonflow, but it failed because of
a low OVS version.

I am using Ubuntu 14.10 server, but its official OVS package is 2.1.3,
which is much lower than the required version (2.3.1+).

So, can anyone point me to an Ubuntu repository that contains suitable
OVS packages?

Thanks,
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Please help review this RFE

2015-09-17 Thread Li Ma
Hi Neutron folks,

I'd like to introduce a pure Python-driven network configuration
library to Neutron. A discussion has just started in the RFE ticket
[1], and I'd like to get feedback on this proposal.

[1]: https://bugs.launchpad.net/neutron/+bug/1492714

Take a look and let me know your thoughts.
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] RFE process question

2015-09-17 Thread Li Ma
That is a reasonable user story. Besides tags, a common description
field for Neutron resources would also be useful.

I submitted an RFE bug for review:
https://bugs.launchpad.net/neutron/+bug/1496705

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [oslo.privsep] Any progress on privsep?

2015-09-17 Thread Li Ma
Hi stackers,

Currently we are discussing the possibility of using a pure Python
library to configure networking in Neutron [1]. We have found that it
is impossible to do without privsep, because we run external commands
that cannot be replaced by Python calls via rootwrap.

Privsep was merged in the Liberty cycle. I just wonder how it is going.

[1] https://bugs.launchpad.net/neutron/+bug/1492714

Thanks a lot,
-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dragonflow] Low OVS version for Ubuntu

2015-09-17 Thread Li Ma
Thanks for all your responses. I just wondered whether there was a
quicker path for me. I'll rebuild it from source, then.

On Thu, Sep 17, 2015 at 11:50 PM, Assaf Muller <amul...@redhat.com> wrote:
> Another issue is that the gate is running with Ubuntu 14.04, which is
> running OVS 2.0. This means we can't test
> certain features in Neutron (For example, the OVS ARP responder).
>
> On Thu, Sep 17, 2015 at 4:17 AM, Gal Sagie <gal.sa...@gmail.com> wrote:
>>
>> Hello Li Ma,
>>
>> Dragonflow uses OpenFlow 1.3 to communicate with OVS, and that's why
>> we need OVS 2.3.1.
>> As suggested, you can build it from source.
>> For Fedora 21, OVS 2.3.1 is part of the default yum repository.
>>
>> You can ping me on IRC (gsagie at freenode) if you need any
>> additional help compiling OVS.
>>
>> Thanks
>> Gal.
>>
>> On Thu, Sep 17, 2015 at 10:24 AM, Sudipto Biswas
>> <sbisw...@linux.vnet.ibm.com> wrote:
>>>
>>>
>>>
>>> On Thursday 17 September 2015 12:22 PM, Li Ma wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I tried to run devstack to deploy Dragonflow, but it failed
>>>> because of a low OVS version.
>>>>
>>>> I am using Ubuntu 14.10 server, but its official OVS package is
>>>> 2.1.3, which is much lower than the required version (2.3.1+).
>>>>
>>>> So, can anyone point me to an Ubuntu repository that contains
>>>> suitable OVS packages?
>>>
>>>
>>> Why don't you just build the OVS you want from here:
>>> http://openvswitch.org/download/
>>>
>>>> Thanks,
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][nova] some conflicts in bridge-mapping in linuxbridge

2015-07-30 Thread Li Ma
Hi all,

Currently, I'm implementing this blueprint:
https://blueprints.launchpad.net/neutron/+spec/phy-net-bridge-mapping

This function enables a user-defined pre-existing bridge to connect
instances, rather than creating a new bridge, which may break the
security rules of some companies.

The Neutron code has been finished recently, but I find the feature is
really difficult to implement because of Nova.

Nova hard-codes 'brq' + net-id as the bridge name in the libvirt
configuration.

In nova/network/neutronv2/api.py:

    def _nw_info_build_network(self, port, networks, subnets):
        ...
        elif vif_type == network_model.VIF_TYPE_BRIDGE:
            bridge = "brq" + port['network_id']

For example, with the flat network type, when Neutron loads a
pre-existing user-defined bridge called 'br-eth1' and wants it to
connect the instances, things will fail because Nova writes 'brq' +
net-id into the libvirt conf file.

I have several solutions for that.

(1) Add the same 'bridge-mapping' configuration to nova.conf for
nova-compute; this is almost the same as the implementation in the
OVS agent. When Nova needs a bridge name, it first reads that
configuration and then decides which bridge to use (see the sketch
after this list).

(2) Let Neutron decide the interface name (tap-xxx) and bridge name
(brq-xxx), not Nova. As a result, when creating a port, Neutron
returns the interface name and bridge name to Nova, so Nova has the
right device names to work with. This frees Nova from the hard-coded
name, but it needs a lot of work.
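
A rough sketch of option (1), with a hypothetical option name and
helper function (this is not merged Nova code), could look like:

    from oslo_config import cfg

    opts = [
        cfg.DictOpt('physical_bridge_mappings', default={},
                    help='Mapping of physical network name to an '
                         'existing bridge name'),
    ]
    cfg.CONF.register_opts(opts, group='neutron')

    def _pick_bridge(physnet, network_id):
        mapping = cfg.CONF.neutron.physical_bridge_mappings
        # Use the operator-defined bridge if one is mapped for this
        # physical network, otherwise keep Nova's current naming.
        return mapping.get(physnet) or 'brq' + network_id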

Please give some advice. Thanks a lot.

Li Ma (Nick)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][binding:host_id] Binding:host_id changes when creating vm.

2015-07-27 Thread Li Ma
Nova will modify it when scheduling. You should let Nova schedule the
VM to the given host, dvr-compute1.novalocal.
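
(If you have admin rights, one way to force that is the scheduler hint
on boot, e.g. 'nova boot --availability-zone
nova:dvr-compute1.novalocal ...', assuming the default 'nova'
availability zone.)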

On Mon, Jul 27, 2015 at 3:14 PM, Kevin Benton <blak...@gmail.com> wrote:
 If it's a VM, Nova sets the binding host id. That field is set by the system
 using the port. It's not a way to move ports around.

 On Jul 26, 2015 20:33, 于洁 <16189...@qq.com> wrote:

 Hi all,

 Recently I used the parameter binding:host_id to create a port, trying
 to allocate the port to a specified host. For example:
   neutron port-create
 e77c556b-7ec8-415a-8b92-98f2f4f3784f
 --binding:host_id=dvr-compute1.novalocal
 But when creating a vm attached to the port created above, the
 binding:host_id changed. Is this normal? Or does the parameter
 binding:host_id not work as expected?
 Any suggestion is appreciated. Thank you.

 Yu


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][zeromq] Some backports to stable/kilo

2015-04-09 Thread Li Ma
Hi oslo all,

Currently devstack master relies on the 1.8.1 release due to the
requirements freeze (>=1.8.0,<1.9.0); however, the ZeroMQ driver is
able to run on the 1.9.0 release. The result is that you cannot deploy
the ZeroMQ driver using devstack master now, due to some
incompatibility between oslo.messaging 1.8.1 and the devstack master
source.

So I have backported 4 recent reviews [1-4] to stable/kilo to make
sure it works. I'd appreciate it if these backports could be accepted
and released in 1.8.2.

[1] https://review.openstack.org/172038
[2] https://review.openstack.org/172061
[3] https://review.openstack.org/172062
[4] https://review.openstack.org/172063

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -19 +20 @@ six>=1.9.0
 -eventlet>=0.16.1
 +eventlet>=0.16.1,!=0.17.0
 @@ -28 +29 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt
 index 937c9f2..f137195 100644
 --- a/test-requirements-py3.txt
 +++ b/test-requirements-py3.txt
 @@ -16 +16 @@ testtools>=0.9.36,!=1.2.0
 -oslotest>=1.2.0  # Apache-2.0
 +oslotest>=1.5.1,<1.6.0  # Apache-2.0
 @@ -25 +25 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 -oslosphinx>=2.2.0  # Apache-2.0
 +oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
 diff --git a/test-requirements.txt b/test-requirements.txt
 index 0b2a583..5afaa74 100644
 --- a/test-requirements.txt
 +++ b/test-requirements.txt
 @@ -16 +16 @@ testtools>=0.9.36,!=1.2.0
 -oslotest>=1.2.0  # Apache-2.0
 +oslotest>=1.5.1,<1.6.0  # Apache-2.0
 @@ -34 +34 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 -oslosphinx>=2.2.0  # Apache-2.0
 +oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
https://review.openstack.org/172045

On Thu, Apr 9, 2015 at 9:21 PM, Li Ma skywalker.n...@gmail.com wrote:
 Hi Doug,

 In the global requirements.txt, the oslo.messaging version is still
 >= 1.8.0 but < 1.9.0. As a result, some bugs fixed in 1.9.0 are still
 there when I deploy with the devstack master branch.

 I submitted a review for the update.

 On Wed, Mar 25, 2015 at 10:22 PM, Doug Hellmann d...@doughellmann.com wrote:
 We are content to announce the release of:

 oslo.messaging 1.9.0: Oslo Messaging API

 This is the first release of the library for the Liberty development cycle.

 For more details, please see the git log history below and:

 http://launchpad.net/oslo.messaging/+milestone/1.9.0

 Please report issues through launchpad:

 http://bugs.launchpad.net/oslo.messaging

 Changes in oslo.messaging 1.8.0..1.9.0
 --

 8da14f6 Use the oslo_utils stop watch in decaying timer
 ec1fb8c Updated from global requirements
 84c0d3a Remove 'UNIQUE_ID is %s' logging
 9f13794 rabbit: fix ipv6 support
 3f967ef Create a unique transport for each server in the functional tests
 23dfb6e Publish tracebacks only on debug level
 53fde06 Add pluggability for matchmakers
 b92ea91 Make option [DEFAULT]amqp_durable_queues work
 cc618a4 Reconnect on connection lost in heartbeat thread
 f00ec93 Imported Translations from Transifex
 0dff20b cleanup connection pool return
 2d1a019 rabbit: Improves logging
 0ec536b fix up verb tense in log message
 b9e134d rabbit: heartbeat implementation
 72a9984 Fix changing keys during iteration in matchmaker heartbeat
 cf365fe Minor improvement
 5f875c0 ZeroMQ deployment guide
 410d8f0 Fix a couple typos to make it easier to read.
 3aa565b Tiny problem with notify-server in simulator
 0f87f5c Fix coverage report generation
 3be95ad Add support for multiple namespaces in Targets
 513ce80 tools: add simulator script
 0124756 Deprecates the localcontext API
 ce7d5e8 Update to oslo.context
 eaa362b Remove obsolete cross tests script
 1958f6e Fix the bug redis do not delete the expired keys
 9f457b4 Properly distinguish between server index zero and no server
 0006448 Adjust tests for the new namespace

 Diffstat (except docs and test files)
 -

 .coveragerc|   7 +
 openstack-common.conf  |   6 +-
 .../locale/de/LC_MESSAGES/oslo.messaging.po|  48 ++-
 .../locale/en_GB/LC_MESSAGES/oslo.messaging.po |  48 ++-
 .../locale/fr/LC_MESSAGES/oslo.messaging.po|  40 ++-
 oslo.messaging/locale/oslo.messaging.pot   |  50 ++-
 oslo_messaging/_drivers/amqp.py|  55 +++-
 oslo_messaging/_drivers/amqpdriver.py  |  15 +-
 oslo_messaging/_drivers/common.py  |  20 +-
 oslo_messaging/_drivers/impl_qpid.py   |   4 +-
 oslo_messaging/_drivers/impl_rabbit.py | 357 
 ++---
 oslo_messaging/_drivers/impl_zmq.py|  32 +-
 oslo_messaging/_drivers/matchmaker.py  |   2 +-
 oslo_messaging/_drivers/matchmaker_redis.py|   7 +-
 oslo_messaging/localcontext.py |  16 +
 oslo_messaging/notify/dispatcher.py|   4 +-
 oslo_messaging/notify/middleware.py|   2 +-
 oslo_messaging/openstack/common/_i18n.py   |  45 +++
 oslo_messaging/openstack/common/versionutils.py| 253 +++
 oslo_messaging/rpc/dispatcher.py   |   6 +-
 oslo_messaging/target.py   |   9 +-
 requirements-py3.txt   |  13 +-
 requirements.txt   |  15 +-
 setup.cfg  |   6 +
 test-requirements-py3.txt  |   4 +-
 test-requirements.txt  |   4 +-
 tools/simulator.py | 207 
 tox.ini|   3 +-
 43 files changed, 1673 insertions(+), 512 deletions(-)


 Requirements updates
 

 diff --git a/requirements-py3.txt b/requirements-py3.txt
 index 05cb050..4ec18c6 100644
 --- a/requirements-py3.txt
 +++ b/requirements-py3.txt
 @@ -5,5 +5,6 @@
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -21 +22 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/requirements.txt b/requirements.txt
 index

Re: [openstack-dev] [release] oslo.messaging 1.9.0

2015-04-09 Thread Li Ma
 @@
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -21 +22 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/requirements.txt b/requirements.txt
 index 3b49a53..ec5fef6 100644
 --- a/requirements.txt
 +++ b/requirements.txt
 @@ -7,5 +7,6 @@ pbr>=0.6,!=0.7,<1.0
 -oslo.config>=1.9.0  # Apache-2.0
 -oslo.utils>=1.2.0   # Apache-2.0
 -oslo.serialization>=1.2.0   # Apache-2.0
 -oslo.i18n>=1.3.0  # Apache-2.0
 -stevedore>=1.1.0  # Apache-2.0
 +oslo.config>=1.9.3,<1.10.0  # Apache-2.0
 +oslo.context>=0.2.0,<0.3.0 # Apache-2.0
 +oslo.utils>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.serialization>=1.4.0,<1.5.0   # Apache-2.0
 +oslo.i18n>=1.5.0,<1.6.0  # Apache-2.0
 +stevedore>=1.3.0,<1.4.0  # Apache-2.0
 @@ -19 +20 @@ six>=1.9.0
 -eventlet>=0.16.1
 +eventlet>=0.16.1,!=0.17.0
 @@ -28 +29 @@ kombu>=2.5.0
 -oslo.middleware>=0.3.0  # Apache-2.0
 +oslo.middleware>=1.0.0,<1.1.0  # Apache-2.0
 diff --git a/test-requirements-py3.txt b/test-requirements-py3.txt
 index 937c9f2..f137195 100644
 --- a/test-requirements-py3.txt
 +++ b/test-requirements-py3.txt
 @@ -16 +16 @@ testtools>=0.9.36,!=1.2.0
 -oslotest>=1.2.0  # Apache-2.0
 +oslotest>=1.5.1,<1.6.0  # Apache-2.0
 @@ -25 +25 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 -oslosphinx>=2.2.0  # Apache-2.0
 +oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
 diff --git a/test-requirements.txt b/test-requirements.txt
 index 0b2a583..5afaa74 100644
 --- a/test-requirements.txt
 +++ b/test-requirements.txt
 @@ -16 +16 @@ testtools>=0.9.36,!=1.2.0
 -oslotest>=1.2.0  # Apache-2.0
 +oslotest>=1.5.1,<1.6.0  # Apache-2.0
 @@ -34 +34 @@ sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
 -oslosphinx>=2.2.0  # Apache-2.0
 +oslosphinx>=2.5.0,<2.6.0  # Apache-2.0
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-04-01 Thread Li Ma
Great. I'm just doing some experiments to evaluate the REQ/REP pattern.

It seems that your implementation is complete.

Looking forward to reviewing your updates.

On Mon, Mar 30, 2015 at 4:02 PM, ozamiatin ozamia...@mirantis.com wrote:
 Hi,

 Sorry for taking so long to reply to the comments on [1].
 I'm almost ready to return to the spec with updates.

 The main shortcoming of the current zmq-driver implementation is that
 it manually implements REQ/REP on top of PUSH/PULL.

 It results in:

 1. PUSH/PULL is a one-way directed socket (a reply needs another
 connection), so we need to support a backwards socket pipeline (two
 pipelines). In REQ/REP we have it all in one socket pipeline.

 2. Supporting delivery of the reply over the second pipeline (a
 REQ/REP state machine).

 I would like to propose such a socket pipeline:
 rpc_client(REQ(tcp)) => proxy_frontend(ROUTER(tcp)) =>
 proxy_backend(DEALER(ipc)) => rpc_server(REP(ipc))

 ROUTER and DEALER are asynchronous substitutes for REQ/REP for
 building 1-N and N-N topologies, and they don't break the pattern.

 The recommended pipeline nicely matches CALL.
 However, CAST can also be implemented over REQ/REP, using the reply as
 a message delivery acknowledgement but not returning it to the caller.
 Listening for the CAST reply in a background thread keeps it async as
 well.

 Regards,
 Oleksii Zamiatin


 On 30.03.15 06:39, Li Ma wrote:

 Hi all,

 I'd like to propose a simple but straightforward method to improve the
 stability of the current implementation.

 Here's the current implementation:

 receiver(PULL(tcp)) -- service(PUSH(tcp))
 receiver(PUB(ipc)) -- service(SUB(ipc))
 receiver(PUSH(ipc)) -- service(PULL(ipc))

 Actually, as far as I know, the local IPC method is much more stable
 than the network one. I'd like to switch PULL/PUSH to REP/REQ for TCP
 communication.

 The change is very simple but effective for stable network
 communication. I cannot apply the patch to our production systems, but
 I tried it in my lab and it works well.

 I know there's another blueprint for the REP/REQ pattern [1], but it's
 not the same, I think.

 I'd like to discuss how to take advantage of REQ/REP in zeromq.

 [1]
 https://review.openstack.org/#/c/154094/2/specs/kilo/zmq-req-rep-call.rst

 Best regards,





-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][zeromq] introduce Request-Reply pattern to improve the stability

2015-03-29 Thread Li Ma
Hi all,

I'd like to propose a simple but straightforward method to improve the
stability of the current implementation.

Here's the current implementation:

receiver(PULL(tcp)) -- service(PUSH(tcp))
receiver(PUB(ipc)) -- service(SUB(ipc))
receiver(PUSH(ipc)) -- service(PULL(ipc))

Actually, as far as I know, the local IPC method is much more stable
than the network one. I'd like to switch PULL/PUSH to REP/REQ for TCP
communication.

The change is very simple but effective for stable network
communication. I cannot apply the patch to our production systems, but
I tried it in my lab and it works well.
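
To make the idea concrete, here is a minimal standalone pyzmq sketch
of the REQ/REP exchange I mean (toy code, not the driver itself):

    import zmq

    ctx = zmq.Context()

    # Receiver side: REP answers every request, which gives the
    # sender an implicit delivery acknowledgement.
    rep = ctx.socket(zmq.REP)
    rep.bind('tcp://*:5555')

    # Service side: REQ waits for the reply, so a lost peer shows up
    # as an error instead of a silently dropped message.
    req = ctx.socket(zmq.REQ)
    req.connect('tcp://127.0.0.1:5555')

    req.send(b'rpc-message')
    print(rep.recv())   # the receiver gets the message...
    rep.send(b'ack')    # ...and acknowledges it
    print(req.recv())   # the sender sees the ack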

I know there's another blueprint for the REP/REQ pattern [1], but it's
not the same, I think.

I'd like to discuss how to take advantage of REQ/REP in zeromq.

[1] https://review.openstack.org/#/c/154094/2/specs/kilo/zmq-req-rep-call.rst

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Li Ma
On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com wrote:
 The goal we set at the Kilo summit was to have a group of people
 interested in zmq start contributing to the driver, and I had hoped to
 the library overall. How do we feel that is going?

That sounds great. I hope so.

 One way to create a separate group to manage the zmq driver is to move
 it to a separate repository. Is the internal API for messaging drivers
 stable enough to do that?

Actually I don't intend to move it to a separate repository. I just
want to find out whether it is possible to hold a regular online
meeting for the zmq driver.

-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-24 Thread Li Ma
By the way, just a heads-up that a general session will be available
for the zeromq driver. I'll present the general architecture of the
current zeromq driver, its pros and cons, potential improvements, and
use cases for production.

Topic: Distributed Messaging System for OpenStack at Scale
Link: http://sched.co/2qpe

On Wed, Mar 25, 2015 at 9:31 AM, Doug Hellmann d...@doughellmann.com wrote:
 Excerpts from ozamiatin's message of 2015-03-24 18:57:25 +0200:
 Hi,
 +1 for subgroup meeting

 Does the separate repository mean a separate library (python package)
 with its own release cycles and so on?

 Yes, although as an Oslo library it would be subject to our existing
 policies about versioning, releases, etc.


 As I can see the separate library makes it easy:

 1) To support optional (for oslo.messaging) requirements specific for
 zmq driver like pyzmq, redis so on
 2) Separate zmq testing. Now we have hacks like skip_test_if_nozmq or
 something like that.

 Disadvantages are:
 1) Synchronizing changes with oslo.messaging (changes to the
 oslo.messaging API may break everything)

 That's a good point. I think the neutron team is using a shim layer
 in-tree to mitigate driver API changes, with most of the driver
 implementation in a separate repository. Doing something like that here
 might make sense.

 That said, a separate repository is only one possible approach.
 Since most of the other Oslo cores seem to not like the idea of
 splitting the driver out, so we shouldn't assume it's going to
 happen.

 2) Additional effort for separate library management (releases and so on)

 As for me, I like the idea of a separate repo for the zmq driver
 because it gives more freedom for driver extension.
 There are some ideas that we could have more than a single zmq driver
 implementation in the future.
 At least we might have different versions, one for HA and one for
 scalability, based on different zmq patterns.


 Thanks,
 Oleksii Zamiatin

 On 24.03.15 18:03, Ben Nemec wrote:
  On 03/24/2015 10:31 AM, Li Ma wrote:
  On Mon, Mar 23, 2015 at 9:24 PM, Doug Hellmann d...@doughellmann.com 
  wrote:
  The goal we set at the Kilo summit was to have a group of people
  interested in zmq start contributing to the driver, and I had hoped to
  the library overall. How do we feel that is going?
  That sounds great. I hope so.
 
  One way to create a separate group to manage the zmq driver is to move
  it to a separate repository. Is the internal API for messaging drivers
  stable enough to do that?
  Actually I don't intend to move it to a separate repository. I just
  want to find out whether it is possible to hold a regular online
  meeting for the zmq driver.
  And personally I'd prefer not to split the repo.  I'd rather explore the
  idea of driver maintainers whose +1 on driver code counts as +2, like we
  had/have with incubator.  Splitting the repo brings up some sticky
  issues with requirements syncs and such.  I'd like to think that with
  only three different drivers we don't need the overhead of managing
  separate repos, but maybe I'm being optimistic. :-)
 
  Kind of off topic since that's not what is being proposed here, but two
  different people have mentioned it so I wanted to note my preference in
  case it comes up again.
 
  -Ben
 
  __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][zeromq] 'Subgroup' for broker-less ZeroMQ driver

2015-03-23 Thread Li Ma
Hi all,

During previous threads discussing the zeromq driver, it was suggested that
a subgroup may be necessary to exchange knowledge and improve the
efficiency of communication and development. In this subgroup, we can
schedule a given topic, or just discuss refactoring work or bugs in an
IRC room at a fixed time.

Actually I'm not sure whether it is suitable to call it a 'subgroup'.
Besides, if possible, we need to figure out a suitable meeting
time and IRC channel. Please give advice.

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-21 Thread Li Ma
On Tue, Mar 10, 2015 at 8:14 PM, ozamiatin ozamia...@mirantis.com wrote:
 Hi Li Ma,

 Thank you very much for your reply


 It's good to hear you have a living deployment with the zmq driver.
 Is there a big divergence between your production and upstream versions
 of the driver? Besides the [1] and [2] fixes for redis, we have [5] and
 [6], critical multi-backend related issues affecting use of the driver in
 real-world deployments.

Actually there's no big divergence between our driver and the
upstream version. We didn't refactor it much; we just fixed all the
bugs we hit and implemented a socket reuse mechanism to
greatly improve its performance. For some remaining bugs, especially
cinder multi-backend and neutron multi-worker, we patched cinder and
neutron to work around them.

I discussed these problems with our cinder developer several
times. Due to the current architecture, they are really difficult
to fix in the zeromq driver. However, they are very easy
to deal with in cinder. We have patches on hand, but the
implementation is a little tricky and upstream may not accept it.
:-( No worries, I'll figure it out soon.

By the way, we are discussing fanout performance and message
persistence. I don't have code available, but I've got some ideas on how
to implement it.


 The only functionality for large-scale deployment that is lacking in the
 current upstream codebase is socket pool scheduling (to provide
 lifecycle management, including recycling and reusing zeromq sockets). We
 implemented it several months ago and are willing to contribute it. I plan
 to propose a blueprint in the next release.

 Pool, recycle and reuse sounds good for performance.

Yes, actually our implementation is a little ugly and there are no unit
tests available. Right now, I'm trying to refactor it and hopefully
I'll submit a spec soon.

 We also need to refactor the driver to reduce redundant entities
 or reconsider them (like ZmqClient or InternalContext) and to reduce code
 duplication (like with topics).
 Some topic management is also needed.
 Clear code == fewer bugs == easier to understand == easier to contribute.
 We need a discussion (with a related spec and UML diagrams) about what the
 driver architecture should be (I'm in progress with that).

+1, couldn't agree with you more.

 3. ZeroMQ integration

 I've been working on the integration of ZeroMQ and DevStack for a
 while and actually it is working right now. I updated the deployment
 guide [3].

 That's true it works! :)

 I think it is time to bring up a non-voting gate for ZeroMQ so we
 can make the functional tests work.

 You can trigger it with 'check experimental'. It is broken now.

I'll figure it out soon.

 5. ZeroMQ discussion

 Here I'd like to apologize regarding this driver. Due to limited spare
 time and my timezone, I'm not available for IRC or other meetings or
 discussions. But if possible, should we create a subgroup for ZeroMQ and
 schedule meetings for it? If we can schedule in advance or at a fixed
 date & time, I'm in.

 That's a great idea.
 +1 for the zmq subgroup and meetings

I'll open another thread to discuss this topic.


 A subfolder is actually what I mean (a python package like '_drivers');
 it should stay in oslo.messaging. A separate package like
 oslo.messaging.zeromq is overkill.
 As Doug proposed, we can do it consistently with the AMQP driver.

I suggest you go for it right now. It is really important for further
development.
If I submit new code based on the current code structure, it will
greatly complicate this work in the future.

Best regards,
-- 
Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][messaging][zmq] Discussion on zmq driver design issues

2015-03-05 Thread Li Ma
Hi all, actually I was writing a mail on the same topic for the zeromq
driver, but I hadn't finished it yet. Thank you for proposing this topic,
ozamiatin.

1. ZeroMQ functionality

I proposed a session for the coming summit to present our
production system, named 'Distributed Messaging System for OpenStack
at Scale'. I don't know whether it will be accepted.
Otherwise, if possible, I can share my experience at the design
summit.

Currently, AWCloud (the company I work for) has deployed more than 20
private clouds and 3 public clouds for our customers in production,
scaling from 10 to 500 physical nodes without any performance issues.
Its performance surpasses all the existing drivers in every respect.
All of them use the ZeroMQ driver. We started improving the ZeroMQ
driver in Icehouse, and the modified driver has since been switched to
oslo.messaging.

As everyone knows, the ZeroMQ driver has gone unmaintained for a long
time. My colleagues and I continuously contribute patches upstream.
Progress is a little slow because we are doing everything in our spare
time and the review process is also not efficient.

Here are two important patches [1], [2], for the redis matchmaker. Once
they land, I think the ZeroMQ driver will be capable of running in small
deployments.

The only functionality for large-scale deployment that is lacking in the
current upstream codebase is socket pool scheduling (to provide
lifecycle management, including recycling and reusing zeromq sockets). We
implemented it several months ago and are willing to contribute it. I plan
to propose a blueprint in the next release.
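To make the idea concrete, here is a minimal sketch of such a pool,
assuming pyzmq and hypothetical names (our real implementation differs):
sockets are keyed by endpoint, reused while fresh, and recycled after a TTL.

import time

import zmq


class ZmqSocketPool(object):
    """Reuse per-endpoint sockets; recycle them after a TTL."""

    def __init__(self, context, ttl=60):
        self._context = context  # one zmq.Context per process
        self._ttl = ttl          # seconds before a socket is recycled
        self._pool = {}          # endpoint -> (socket, created_at)

    def get(self, endpoint):
        entry = self._pool.get(endpoint)
        if entry is not None:
            sock, created = entry
            if time.time() - created < self._ttl:
                return sock          # reuse the live connection
            sock.close(linger=0)     # expired: recycle it
        sock = self._context.socket(zmq.PUSH)
        sock.connect(endpoint)
        self._pool[endpoint] = (sock, time.time())
        return sock

    def close(self):
        for sock, _ in self._pool.values():
            sock.close(linger=0)
        self._pool.clear()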

2. Why ZeroMQ matters for OpenStack

ZeroMQ is the only driver that depends on a stable library rather than an
open source broker product. This is the most important point that comes
to my mind. When we deploy clouds with RabbitMQ or Qpid, we need
comprehensive knowledge from their communities, from deployment best
practices to performance tuning at different scales. As open source
products, they inevitably have bugs, and you end up having to push lots
of things in communities other than the OpenStack community.
In the end, that doesn't really work, as you all know, right?

The ZeroMQ library itself is just an encapsulation of sockets; it is
stable and has long been widely used in large-scale cluster
communication. We can build our own messaging system for
inter-component RPC. We can improve it for OpenStack and keep
governance of the codebase. We don't need to rely on products outside
the community. Actually, only ZeroMQ provides that possibility.

IMO, we can keep improving it until it becomes another solid choice
for operators.

3. ZeroMQ integration

I've been working on the integration of ZeroMQ and DevStack for a
while and actually it is working right now. I updated the deployment
guide [3].

I think it is time to bring up a non-voting gate for ZeroMQ so we
can make the functional tests work.

4. ZeroMQ blueprints

We'd love to provide blueprints to improve ZeroMQ, as ozamiatin does.
By my estimate, ZeroMQ can become another choice for
production in 1-2 release cycles, given the blueprint and patch review
process.

5. ZeroMQ discussion

Here I'd like to apologize regarding this driver. Due to limited spare
time and my timezone, I'm not available for IRC or other meetings or
discussions. But if possible, should we create a subgroup for ZeroMQ and
schedule meetings for it? If we can schedule in advance or at a fixed
date & time, I'm in.

6. Feedback to ozamiatin's suggestions

I'm with you on almost all the proposals, but for packaging, I think we
can just separate the components into a sub-directory. That step is
enough at the current stage.

Packaging the components separately is complicated. I don't think it is
feasible to break oslo.messaging into two packages, like oslo.messaging
and oslo.messaging.zeromq, and I cannot see a clear benefit.

For priorities, I think numbers 1, 6 and 7 are high priority,
especially 7. Because ZeroMQ is pretty new for everyone, we do need
more written material to promote and introduce it to the community. By
the way, I created a wiki earlier and everyone is welcome to update it [4].

[1] https://review.openstack.org/#/c/152471/
[2] https://review.openstack.org/#/c/155673/
[3] https://review.openstack.org/#/c/130943/
[4] https://wiki.openstack.org/wiki/ZeroMQ

Thanks a lot,
Li Ma (Nick)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ZeroMQ topic object.

2015-02-04 Thread Li Ma
Sorry for the late reply. Your proposal is interesting. According to
the previous discussion of work on the zeromq driver, we should first
make the zeromq driver complete for all the OpenStack projects and CI.
Recently, lots of patches have been submitted upstream, but some
critical ones are still in review [1], [2]. I'm also working on
devstack support and documentation [3-5]. So, IMO we should make it
maintainable in the current release cycle, and we can try to improve
it in the next release cycle.

[1] https://review.openstack.org/#/c/142651/
[2] https://review.openstack.org/#/c/150286/
[3] https://review.openstack.org/#/c/143533/
[4] https://review.openstack.org/#/c/145722/
[5] https://review.openstack.org/#/c/130943/

Regards,
Nick

On Fri, Dec 26, 2014 at 10:58 PM, Ilya Pekelny ipeke...@mirantis.com wrote:
 Hi, all!

 Unexpectedly I hit a pretty significant issue while solving a
 small bug
 https://bugs.launchpad.net/ubuntu/+source/oslo.messaging/+bug/1367614. The
 problem has several parts:

 * Topics are used for several purposes: to set subscriptions and to
 determine the type of sockets.
 * Topics are strings which are modified inline everywhere it is
 needed. So, the topic feature is very scattered and uncoordinated.

 My issue with the bug was: it is impossible to simply hash the topic
 somewhere without crashing the whole driver. The second part of the issue
 is: it is a very painful process to trace all the topic modifications,
 which are scattered throughout the driver code.

 After several attempts to fix the bug with small losses, I concluded that I
 need to create a control/entry point for the topic string. Now I have a
 proposal:

 Blueprint —
 https://blueprints.launchpad.net/oslo.messaging/+spec/zmq-topic-object
 Spec — https://review.openstack.org/#/c/144149/
 Patch — https://review.openstack.org/#/c/144120/
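For illustration only, a minimal sketch of what such a Topic object could
look like (attribute names are hypothetical; the spec and patch above
define the actual interface). The point is that every topic transformation
goes through one class instead of ad-hoc string manipulation scattered
across the driver:

class Topic(object):
    """Single entry point for topic construction and modification."""

    def __init__(self, name, server=None, fanout=False):
        self.name = name
        self.server = server
        self.fanout = fanout

    def clone(self, **overrides):
        # Modifications create a new object instead of mutating strings.
        attrs = dict(name=self.name, server=self.server, fanout=self.fanout)
        attrs.update(overrides)
        return Topic(**attrs)

    def __str__(self):
        # One place decides the wire format, so hashing or escaping can
        # be added here without touching the rest of the driver.
        parts = ['fanout~' + self.name if self.fanout else self.name]
        if self.server:
            parts.append(self.server)
        return '.'.join(parts)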

 I want to discuss this feature and receive feedback from more
 experienced rpc-Jedi.

 Thanks!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-02-04 Thread Li Ma
Any news here? The per-socket solution is a conservative one that
makes the zeromq driver work for multiple workers. Neutron-server has
api-workers and rpc-workers. I'm not sure the per-driver approach is
applicable. I will try to figure it out soon.

On Fri, Jan 23, 2015 at 7:53 PM, Oleksii Zamiatin
ozamia...@mirantis.com wrote:
 On 23.01.15 13:22, Elena Ezhova wrote:



 On Fri, Jan 23, 2015 at 1:55 PM, Ilya Pekelny ipeke...@mirantis.com wrote:



 On Fri, Jan 23, 2015 at 12:46 PM, ozamiatin ozamia...@mirantis.com
 wrote:

 IMHO It should be created once per Reactor/Client or even per driver
 instance.


 Per driver, sounds good.

 Wouldn't this create a regression for Neutron? The original change was
 supposed to fix the bug [1], where each api-worker process got the same
 copy of the Context due to its singleton nature.

 It wouldn't be a singleton now, because each process should have its own
 driver instance. We will of course check this case. Each api-worker should
 get its own context. The goal now is to have no more than one context
 per worker.
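A minimal sketch of the per-driver approach under discussion (class and
method names are hypothetical): each driver instance, and hence each forked
worker process, owns exactly one zmq.Context, and all of its sockets are
created from it.

import zmq


class ZmqDriver(object):
    def __init__(self, conf):
        # One context per driver instance; a forked api-worker builds
        # its own driver and therefore its own context, avoiding the
        # shared singleton that broke multi-worker services.
        self._zmq_context = zmq.Context()

    def make_socket(self, sock_type, endpoint):
        # All sockets come from the same per-process context, which
        # matters for transports that require a shared context.
        sock = self._zmq_context.socket(sock_type)
        sock.connect(endpoint)
        return sock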




 By the way (I haven't checked it yet with the current implementation of
 the driver), such an approach could break the IPC, because those kinds of
 sockets must be produced from the same context.


 Please, check it. Looks like a potential bug.

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 [1] https://bugs.launchpad.net/neutron/+bug/1364814


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 

Li Ma (Nick)
Email: skywalker.n...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps

2014-12-19 Thread Li Ma


On 2014/12/9 22:07, Doug Hellmann wrote:

On Dec 8, 2014, at 11:25 PM, Li Ma skywalker.n...@gmail.com wrote:


Hi all, I tried to deploy zeromq with devstack and it definitely failed with
lots of problems, like dependencies, topics, matchmaker setup, etc. I've
already registered a blueprint for devstack-zeromq [1].

I added the [devstack] tag to the subject of this message so that the team
will see the thread.


Besides, I suggest building a wiki page in order to track all the work items
related to ZeroMQ. The general sections might be [Why ZeroMQ], [Current Bugs
& Reviews], [Future Plans & Blueprints], [Discussions], [Resources], etc.

Coordinating the work on this via a wiki page makes sense. Please post the link 
when you’re ready.

Doug

Hi all, I collected the current status of the ZeroMQ driver and posted a
wiki link [1] for it.


For the bugs marked as Critical & High, as far as I know, some
developers are working on them. Patches are coming soon. BTW, I'm also
working on the devstack support. Hope to land everything in the kilo cycle.


[1] https://wiki.openstack.org/wiki/ZeroMQ


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][devstack] ZeroMQ driver maintenance next steps

2014-12-10 Thread Li Ma


On 2014/12/9 22:07, Doug Hellmann wrote:

On Dec 8, 2014, at 11:25 PM, Li Ma skywalker.n...@gmail.com wrote:


Hi all, I tried to deploy zeromq with devstack and it definitely failed with
lots of problems, like dependencies, topics, matchmaker setup, etc. I've
already registered a blueprint for devstack-zeromq [1].

I added the [devstack] tag to the subject of this message so that the team
will see the thread.

Thanks for helping fix this critical bug, Doug. :-)

@devstack:
Currently, I cannot find any devstack-specs repo for proposing
blueprint details. So, if any devstack folks are here, please help review
this blueprint [1]; any comments are welcome. This is really
important for us.


Actually, I thought that I could provide a bug fix to make it work,
but after evaluation, I would like to file it as a blueprint, because
there is a lot of work and a blueprint is suitable for tracking
everything.


[1] https://blueprints.launchpad.net/devstack/+spec/zeromq


Besides, I suggest building a wiki page in order to track all the work items
related to ZeroMQ. The general sections might be [Why ZeroMQ], [Current Bugs
& Reviews], [Future Plans & Blueprints], [Discussions], [Resources], etc.

Coordinating the work on this via a wiki page makes sense. Please post the link 
when you’re ready.

Doug

OK. I'll get it done soon.

Any comments?

[1] https://blueprints.launchpad.net/devstack/+spec/zeromq

cheers,
Li Ma

On 2014/11/18 21:46, James Page wrote:


On 18/11/14 00:55, Denis Makogon wrote:

So if zmq driver support in devstack is fixed, we can easily add a
new job to run them in the same way.


Btw this is a good question. I will take a look at the current state of
zmq in devstack.

I don't think it's that far off, and it's broken rather than missing -
the rpc backend code needs updating to use oslo.messaging rather than
project specific copies of the rpc common codebase (pre oslo).
Devstack should be able to run with the local matchmaker in most
scenarios but it looks like there was support for the redis matchmaker
as well.

If you could take some time to fixup that would be awesome!

- -- James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] ZeroMQ driver maintenance next steps

2014-12-08 Thread Li Ma
Hi all, I tried to deploy zeromq by devstack and it definitely failed 
with lots of problems, like dependencies, topics, matchmaker setup, etc. 
I've already registered a blueprint for devstack-zeromq [1].


Besides, I suggest to build a wiki page in order to trace all the 
workitems related with ZeroMQ. The general sections may be [Why ZeroMQ], 
[Current Bugs  Reviews], [Future Plan  Blueprints], [Discussions], 
[Resources], etc.


Any comments?

[1] https://blueprints.launchpad.net/devstack/+spec/zeromq

cheers,
Li Ma

On 2014/11/18 21:46, James Page wrote:


On 18/11/14 00:55, Denis Makogon wrote:


So if zmq driver support in devstack is fixed, we can easily add a
new job to run them in the same way.


Btw this is a good question. I will take a look at the current state of
zmq in devstack.

I don't think it's that far off, and it's broken rather than missing -
the rpc backend code needs updating to use oslo.messaging rather than
project specific copies of the rpc common codebase (pre oslo).
Devstack should be able to run with the local matchmaker in most
scenarios but it looks like there was support for the redis matchmaker
as well.

If you could take some time to fixup that would be awesome!

- -- 
James Page

Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-19 Thread Li Ma

Hi Yatin,

Thanks for sharing your presentation. It looks great. You're welcome to
contribute to the ZeroMQ driver.


Cheers,
Li Ma

On 2014/11/19 12:50, yatin kumbhare wrote:

Hello Folks,

Here are a couple of slides/diagrams I documented for my own understanding
back in the Havana release. See slide 10 onward in particular.


https://docs.google.com/presentation/d/1ZPWKXN7dzXs9bX3Ref9fPDiia912zsHCHNMh_VSMhJs/edit#slide=id.p

I am also committed to using zeromq as it's light-weight/fast/scalable.

I would like to chip in for further development regarding zeromq.

Regards,
Yatin

On Wed, Nov 19, 2014 at 8:05 AM, Li Ma skywalker.n...@gmail.com 
mailto:skywalker.n...@gmail.com wrote:



On 2014/11/19 1:49, Eric Windisch wrote:


I think for this cycle we really do need to focus on
consolidating and
testing the existing driver design and fixing up the biggest
deficiency (1) before we consider moving forward with lots of new


+1

 1) Outbound messaging connection re-use - right now every outbound
 message creates and consumes a TCP connection - this approach scales
 badly when neutron does large fanout casts.



I'm glad you are looking at this and by doing so, will understand
the system better. I hope the following will give some insight
into, at least, why I made the decisions I made:
This was an intentional design trade-off. I saw three choices
here: build a fully decentralized solution, build a
fully-connected network, or use centralized brokerage. I wrote
off centralized brokerage immediately. The problem with a fully
connected system is that active TCP connections are required
between all of the nodes. I didn't think that would scale and
would be brittle against floods (intentional or otherwise).

IMHO, I always felt the right solution for large fanout casts was
to use multicast. When the driver was written, Neutron didn't
exist and there was no use-case for large fanout casts, so I
didn't implement multicast, but knew it as an option if it became
necessary. It isn't the right solution for everyone, of course.


Using multicast will add complexity to the switch forwarding
plane, which must enable and maintain multicast group
communication. For large deployment scenarios, I prefer to keep
forwarding simple and easy to maintain. IMO, running a set of
fanout-router processes in the cluster can also achieve the goal.
The data path is: openstack-daemon sends the message (with
fanout=true) -> fanout-router -> reads the matchmaker -> sends to
the destinations.
Actually it just uses unicast to simulate multicast.
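A minimal sketch of such a fanout-router (the matchmaker lookup API shown
is hypothetical): it receives each fanout message once and unicasts it to
every destination registered for the topic.

import zmq


def fanout_router(matchmaker, listen_endpoint):
    """Receive fanout messages and unicast them to each destination."""
    ctx = zmq.Context()
    frontend = ctx.socket(zmq.PULL)  # daemons PUSH fanout messages here
    frontend.bind(listen_endpoint)
    while True:
        topic, payload = frontend.recv_multipart()
        # The matchmaker maps a topic to all hosts subscribed to it.
        for endpoint in matchmaker.lookup(topic):
            backend = ctx.socket(zmq.PUSH)  # plain unicast per destination
            backend.connect(endpoint)
            backend.send_multipart([topic, payload])
            backend.close(linger=0)

A real router would of course reuse the outbound sockets (see the pool
discussion below) instead of opening one per message.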

For connection reuse, you could manage a pool of connections and
keep those connections around for a configurable amount of time,
after which they'd expire and be re-opened. This would keep the
most actively used connections alive. One problem is that it
would make the service more brittle by making it far more
susceptible to running out of file descriptors by keeping
connections around significantly longer. However, this wouldn't
be as brittle as fully-connecting the nodes nor as poorly scalable.


+1. Setting a large number of fds is not a problem. Because we use
a socket pool, we can control and keep a fixed number of fds.

If OpenStack and oslo.messaging were designed specifically around
this message pattern, I might suggest that the library and its
applications be aware of high-traffic topics and persist the
connections for those topics, while keeping others ephemeral. A
good example for Nova: api->scheduler traffic would be
persistent, whereas scheduler->compute_node would be ephemeral.
Perhaps this is something that could still be added to the library.


2) PUSH/PULL tcp sockets - Pieter suggested we look at
ROUTER/DEALER
as an option once 1) is resolved - this socket type pairing
has some
interesting features which would help with resilience and
availability
including heartbeating. 



Using PUSH/PULL does not eliminate the possibility of being fully
connected, nor is it incompatible with persistent connections. If
you're not going to be fully-connected, there isn't much
advantage to long-lived persistent connections and without those
persistent connections, you're not benefitting from features such
as heartbeating.


How about REQ/REP? I think it is appropriate for long-lived
persistent connections and also provides reliability because of the reply.

I'm not saying ROUTER/DEALER cannot be used, but use them with
care. They're designed for long-lived channels between hosts and
not for the ephemeral-type connections used in a peer-to-peer
system. Dealing with how to manage timeouts on the client and the
server and the swelling number of active file

Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Li Ma


On 2014/11/17 23:03, Eric Windisch wrote:


On Mon, Nov 17, 2014 at 5:44 AM, Ilya Pekelny ipeke...@mirantis.com 
mailto:ipeke...@mirantis.com wrote:


We want to discuss the opportunity to implement the p2p
messaging model in oslo.messaging's ZeroMQ driver. The actual
architecture uses an uncharacteristic single-broker model;
in this way we are ignoring the key 0MQ ideas. Let's
make our case with quotes from the ZeroMQ documentation:


The oslo.messaging driver is not using a single broker. It is designed 
for a distributed broker model where each host runs a broker. I'm not 
sure where the confusion comes from that implies this is a 
single-broker model?


All of the points you make around negotiation and security are new 
concepts introduced after the initial design and implementation of the 
ZeroMQ driver. It certainly makes sense to investigate what new 
features are available in ZeroMQ (such as CurveCP) and to see how they 
might be leveraged.


That said, quite a bit of trial-and-error and research went into 
deciding to use an opposing PUSH-PULL mechanism instead of REQ/REP. 
Most notably, it's much easier to make PUSH/PULL reliable than REQ/REP.
Hi Eric. In our production deployment, we rely heavily on the ZeroMQ driver
because we have more than one thousand physical machines running
OpenStack. The distributed nature of ZeroMQ provides a more scalable
messaging plane than RabbitMQ.


However, as you know, the driver codebase is not that mature and stable.
The first problem we faced is that we cannot confirm delivery status
when sending messages via PUSH/PULL, because there's no ACK in this
model. We are discussing the possibility of changing from PUSH/PULL to
REQ/REP. But here you said that it is much easier to make PUSH/PULL
reliable than REQ/REP. Do you have any ideas or concerns about improving
the reliability of the current model?
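For reference, a minimal pyzmq sketch of the REQ/REP pattern we are
considering, where the reply doubles as a delivery acknowledgement; the
timeout handling is exactly the part that makes REQ/REP tricky in practice:

import zmq

ctx = zmq.Context()


def serve(endpoint):
    # Server side: every received message is answered, so the sender
    # always gets an ACK for a delivered message.
    rep = ctx.socket(zmq.REP)
    rep.bind(endpoint)
    while True:
        msg = rep.recv()
        # ... process msg ...
        rep.send(b'ACK')


def send_with_ack(endpoint, msg, timeout_ms=5000):
    # Client side: a missing reply within the timeout means delivery
    # cannot be confirmed.
    req = ctx.socket(zmq.REQ)
    req.connect(endpoint)
    req.send(msg)
    if req.poll(timeout_ms, zmq.POLLIN):
        return req.recv() == b'ACK'
    req.close(linger=0)  # the REQ socket is stuck mid-cycle; discard it
    return False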


cheers,
Li Ma
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Li Ma

On 2014/11/17 22:26, Russell Bryant wrote:

On 11/17/2014 05:44 AM, Ilya Pekelny wrote:

Hi, all!

We want to discuss the opportunity to implement the p2p messaging
model in oslo.messaging's ZeroMQ driver.

On a related note, have you looked into AMQP 1.0 at all?  I have been
hopeful about the development to support it because of these same reasons.

The AMQP 1.0 driver is now merged.  I'd really like to see some work
around trying it out with the dispatch router [1].  It seems like using
amqp 1.0 + a distributed network of disaptch routers could be a very
scalable approach.  We still need to actually try it out and do some
scale and performance testing, though.

[1] http://qpid.apache.org/components/dispatch-router/

The design of dispatch-router is appealing. However, I have some concerns
about Qpid and the library. Is it reliable? Are there any success stories
of large-scale deployments using it?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zero MQ remove central broker. Architecture change.

2014-11-18 Thread Li Ma

On 2014/11/17 18:44, Ilya Pekelny wrote:

Hi, all!

We want to discuss the opportunity to implement the p2p
messaging model in oslo.messaging's ZeroMQ driver. The actual
architecture uses an uncharacteristic single-broker model.
In this way we are ignoring the key 0MQ ideas. Let's make our
case with quotes from the ZeroMQ documentation:


  * ZeroMQ has the core technical goals of simplicity and scalability,
the core social goal of gathering together the best and brightest
minds in distributed computing to build real, lasting solutions,
and the political goal of breaking the old hegemony of
centralization, as represented by most existing messaging systems
prior to ZeroMQ.
  * The ZeroMQ Message Transport Protocol (ZMTP) is a transport layer
protocol for exchanging messages between two peers over a
connected transport layer such as TCP.
  * The two peers agree on the version and security mechanism of the
connection by sending each other data and either continuing the
discussion, or closing the connection.
  * The two peers handshake the security mechanism by exchanging zero
or more commands. If the security handshake is successful, the
peers continue the discussion, otherwise one or both peers closes
the connection.
  * Each peer then sends the other metadata about the connection as a
final command. The peers may check the metadata and each peer
decides either to continue, or to close the connection.
  * Each peer is then able to send the other messages. Either peer may
at any moment close the connection.

From the current code docstring:

ZmqBaseReactor(ConsumerBase):
A consumer class implementing a centralized casting broker 
(PULL-PUSH).
Hi, Ilya, thanks for raising this topic. Inline you discussed the nature of
ZeroMQ, but I still cannot find any direction on how to refactor
or redesign the ZeroMQ driver for oslo.messaging. :-) Could you provide
more details on how you think of it?


 This approach is pretty unusual for ZeroMQ. Fortunately we have a bit
 of raw development around the problem. These changes can bring
 performance improvements, but to prove it we need to implement all the
 new features, at least at WIP status. So, I need to be sure that the
 community won't reject such improvements.
For community work, AFAIK, we'd first set up CI for ZeroMQ. After
that, we can work together on how to improve the performance, reliability
and scalability of the ZeroMQ driver.


cheers,
Li Ma
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Query regarding vRouters Openstack

2014-11-16 Thread Li Ma

Here are several hints:

1. Brocade has implemented a service plugin [1] for a vRouter using its
own Vyatta image (which seems not to be free or open source), but you can
read the plugin source code as a reference.


2. Here's a new method [2] for an L3 fabric. It's totally open source and
can use any routing stack, like quagga or bird.


I've tested the second method. It works pretty well.
[1] 
https://github.com/openstack/neutron/tree/master/neutron/services/l3_router/brocade
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2014-October/049150.html


cheers,
Li Ma

On 2014/11/17 11:53, Vikram Choudhary wrote:


Hi All,

Can someone please help clarify the doubts below regarding vRouters
(say Vyatta/Quagga, for example)?


How can we use the OpenStack framework to communicate with vRouters?

To be more precise:

· How can we make communication between neutron and a vRouter possible?

· How can we push vRouter-related configuration using neutron?

It will be great if you can help us with the above queries.

Thanks

Vikram



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-10-11 Thread Li Ma


On 2014/9/17 22:34, Doug Hellmann wrote:


The documentation in the oslo.messaging repository [2] would be a good place to 
start for that. If we decide deployers/operators need the information we can 
either refer to it from the guides managed by the documentation team or we can 
move/copy the information. How about if you start a new drivers subdirectory 
there, and add information about zmq. We can have other driver authors provide 
similar detail about their drivers in the same directory.

[2] http://git.openstack.org/cgit/openstack/oslo.messaging/tree/doc/source
Hi all, I wrote a deployment guide of ZeroMQ for oslo.messaging, which
is located at


https://github.com/li-ma/zmq-for-oslo

Do I need to file a bug or propose a blueprint to track and merge it into
the oslo.messaging doc tree?



3) an analysis of what it would take to be able to run functional
tests for zeromq on our CI infrastructure, not necessarily the full
tempest run or devstack-gate job, probably functional tests we place
in the tree with the driver (we will be doing this for all of the
drivers) + besides writing new functional tests, we need to bring the
unit tests for zeromq into the oslo.messaging repository

Kapil Thangavelu started work on both functional tests for the ZMQ
driver last week; the output from the sprint is here:

https://github.com/ostack-musketeers/oslo.messaging

it covers the ZMQ driver (including messaging through the zmq-receiver
proxy) and the associated MatchMakers (local, ring, redis) at varying
levels of coverage, but I feel it moves things in the right
direction - Kapil's going to raise a review for this in the next
couple of days.

Doug - has any structure been agreed within the oslo.messaging tree
for unit/functional test splits? Right now we have them all in one place.

I think we will want them split up, but we don’t have an agreed existing 
structure for that. I would like to see a test framework of some sort that 
defines the tests in a way that can be used to run the same functional for all 
of the drivers as separate jobs (with appropriate hooks for ensuring the needed 
services are running, etc.). Setting that up warrants its own spec, because 
there are going to be quite a few details to work out. We will also need to 
participate in the larger conversation about how to set up those functional 
test jobs to be consistent with the other projects.
It's good to hear someone is working on the test stuff. I suggest dealing
with the unit tests for ZeroMQ first.


Anyway, are there any sessions related to this topic at the summit?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] zeromq work for kilo

2014-09-20 Thread Li Ma

Hi all, I almost lost this new thread discussing the ZeroMQ issue.

On 2014/9/19 5:29, Eric Windisch wrote:


I believe it makes good sense for all drivers, in the long term. 
However, the most immediate benefits would be in offloading any 
drivers that need substantial work or improvements, aka velocity. That 
would mean the AMQP and ZeroMQ drivers.


It's very interesting. If we separate all the drivers out of the main
framework, we can have different review groups working on sub-projects,
and people can work with a sub-group according to their specialty and
preference. It can greatly improve the quality of reviewing and speed up
the process as well.


Another thing I'll note is that before pulling Ironic in, Nova had an
API contract test. This can be useful for making sure that changes in
the upstream project don't break drivers, or that breakages could at
least prompt action by the driver team:
https://github.com/openstack/nova/blob/4ce3f55d169290015063131134f93fca236807ed/nova/tests/virt/test_ironic_api_contracts.py



Yes, it is necessary to keep the API consistent.
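Roughly this kind of test, sketched here with inspect (illustrative import
path and signature, not Nova's actual code), pins the driver-facing API so
that an upstream change fails loudly:

import inspect
import unittest

from oslo_messaging._drivers import base  # illustrative import path


class TestDriverAPIContract(unittest.TestCase):
    """Fail loudly if the internal driver API changes shape."""

    def test_send_signature(self):
        # getargspec fits the python 2 era of this thread.
        argspec = inspect.getargspec(base.BaseDriver.send)
        self.assertEqual(['self', 'target', 'ctxt', 'message'],
                         argspec.args[:4])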

Cheers,
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Li Ma
Thanks for all the detailed analysis, Mike W, Mike B, and Roman.
 
For a production-ready database system, replication is a must, I think. So,
the questions are which replication mode is suitable for OpenStack and how
OpenStack can best improve the performance and scalability of DB access.

In the current implementation of the database API in OpenStack, a
master/slave connection is defined for optimizing performance. Developers of
each OpenStack component take responsibility for making use of it in the
application context, and others take responsibility for architecting the
database system to meet the requirements of various production environments.
There is no general guideline for it. Actually, it is not that easy to
determine which transactions can be served by a slave, due to data
consistency and business logic in the different OpenStack components.

The current status is that the master/slave configuration is not widely
used; only Nova uses the slave connection, in its periodic tasks, which are
not sensitive to the status of replication. Due to the nature of
asynchronous replication, DB queries may return stale data, so the risks of
using slaves are apparent.

How about a Galera multi-master cluster? As Mike Bayer said, it is virtually
synchronous by default. It is still possible that outdated rows are queried,
which makes results unstable.

When using such eventual consistency methods, you have to carefully design
which transactions are tolerant of old data. AFAIK, no matter which
component it is, Nova, Cinder or Neutron, most of the transactions are not
that 'tolerant'. As Mike Bayer said, a consistent relational dataset is very
important, and as a footnote, it is very important for OpenStack components
in particular. This is why only non-sensitive periodic tasks use slaves in
Nova.

Let's move forward to synchronous replication, like Galera with causal-reads
on. The dominant advantage is that it supports a consistent relational
dataset. The disadvantages are that it uses optimistic locking and its
performance suffers (also per Mike Bayer :-). The optimistic locking
problem, I think, can be dealt with by retry-on-deadlock. It's not the
topic here.
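(As an aside, a hand-rolled sketch of that retry-on-deadlock workaround,
assuming oslo.db's DBDeadlock exception; oslo.db provides similar helpers:)

import functools
import time

from oslo_db import exception as db_exc


def retry_on_deadlock(max_retries=5, delay=0.5):
    """Replay a DB operation when the optimistic lock aborts it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except db_exc.DBDeadlock:
                    if attempt == max_retries - 1:
                        raise
                    time.sleep(delay)  # back off, then retry the transaction
        return wrapper
    return decorator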

If we first set aside the performance problem, a multi-master cluster with
synchronous replication is perfect for OpenStack, with any number of
masters+slaves enabled, and it can truly scale out.

So, transparent read/write separation depends on such an environment.
The SQLAlchemy tutorial provides a code sample for it [1]. Besides, Mike
Bayer also provides a blog post about it [2].

What I did was re-implement it in the OpenStack DB API modules in my
development environment, using a Galera cluster (causal-reads on). It has
been running perfectly for more than a week. The routing session manager
works well while maintaining data consistency.
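The core of that re-implementation is a routing Session along the lines of
the SQLAlchemy docs in [1]; a minimal sketch with placeholder connection
URLs (the real code lives in the DB API modules):

from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

master = create_engine('mysql://user:pass@master-host/db')  # placeholders
slave = create_engine('mysql://user:pass@slave-host/db')


class RoutingSession(Session):
    """Route flushes/DML to the master, plain reads to a slave.

    Safe only when the slave is causally consistent (e.g. Galera with
    causal-reads on); otherwise reads may miss rows just written.
    """

    def get_bind(self, mapper=None, clause=None):
        if self._flushing:  # writes always go to the master
            return master
        return slave        # reads may be served by the slave


DBSession = sessionmaker(class_=RoutingSession)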

Back to the performance problem: theoretically, causal-reads-on will
definitely affect the overall performance of concurrent DB reads, but I
cannot find any report (official or unofficial) on causal-reads
performance degradation. Actually, in the production system of my
company, Galera performance is tuned via network round-trip time, network
throughput, number of slave threads, keep-alive and wsrep flow control
parameters.

All in all: firstly, transparent read/write separation is feasible using a
synchronous replication method. Secondly, it may help scale-out in large
deployments without any code modification. Moreover, it needs fine-tuning
(of course, every production system needs that :-). Finally, I think if we
can integrate it into oslo.db, it is a perfect plus for those who would like
to deploy Galera (or a similar technology) as the DB backend.

[1] 
http://docs.sqlalchemy.org/en/rel_0_9/orm/session.html#custom-vertical-partitioning
[2] 
http://techspot.zzzeek.org/2012/01/11/django-style-database-routers-in-sqlalchemy/
[3] Galera replication method: http://galeracluster.com/products/technology/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-10 Thread Li Ma
 not sure if I said that :).  I know extremely little about galera.

Hi Mike Bayer, I'm so sorry I mistook you for Mike Wilson in the last post.
:-) Apologies to Mike Wilson as well.

 I’d totally guess that Galera would need to first have SELECTs come from a 
 slave node, then the moment it sees any kind of DML / writing, it 
 transparently switches the rest of the transaction over to a writer node.

You are totally right.

 
 @transaction.writer
 def read_and_write_something(arg1, arg2, …):
 # …
 
 @transaction.reader
 def only_read_something(arg1, arg2, …):
 # …

The first approach I had in mind was the decorator-based method to
separate read/write ops, like what you said. To some degree, it is almost
the same app-level approach as the master/slave configuration, in terms of
transparency to developers. However, as I stated before, the current
approach is barely used in OpenStack. A decorator is more friendly than
use_slave_flag or something like that. If ideal transparency cannot be
achieved then, to say the least, decorator-based app-level switching is a
great improvement compared with the current implementation.
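Concretely, the decorators could just mark the enclosed unit of work, and a
routing session consults that mark; a sketch with hypothetical helper names:

import functools
import threading

_ctx = threading.local()


def writer(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        _ctx.use_master = True  # the whole unit of work needs the master
        try:
            return fn(*args, **kwargs)
        finally:
            _ctx.use_master = False
    return wrapper


def reader(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        _ctx.use_master = False  # queries may safely go to a slave
        return fn(*args, **kwargs)
    return wrapper

A RoutingSession.get_bind could then return the master engine whenever
getattr(_ctx, 'use_master', False) is set.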

 OK so Galera would perhaps have some way to make this happen, and that's 
 great.

If there is any Galera expert here, please correct me. At least in my
experiments, transactions work in that way.

 this (the word “integrate”, and what does that mean) is really the only thing 
 making me nervous.

Mike, don't worry. What I'd like to do is add a django-style routing
method as an option in oslo.db, like:

[database]
# Original master/slave configuration
master_connection = 
slave_connection = 

# Only supports synchronous replication
enable_auto_routing = True

[db_cluster]
master_connection = 
master_connection = 
...
slave_connection = 
slave_connection = 
...

HOWEVER, I think it needs more investigation, which is why I'd like to put
it on the mailing list at this early stage to raise some in-depth
discussion. I'm not a Galera expert. I really appreciate any challenges here.

Thanks,
Li Ma


- Original Message -
From: Mike Bayer mba...@redhat.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Sunday, August 10, 2014, 11:57:47 PM
Subject: Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation


On Aug 10, 2014, at 11:17 AM, Li Ma skywalker.n...@gmail.com wrote:

 
 How about a Galera multi-master cluster? As Mike Bayer said, it is virtually
 synchronous by default. It is still possible that outdated rows are
 queried, which makes results unstable.

not sure if I said that :).  I know extremely little about galera.


 
 
 Let's move forward to synchronous replication, like Galera with
 causal-reads on. The dominant advantage is that it supports a consistent
 relational dataset. The disadvantages are that it uses optimistic locking
 and its performance suffers (also per Mike Bayer :-). The optimistic
 locking problem, I think, can be dealt with by retry-on-deadlock. It's not
 the topic here.

I *really* don’t think I said that, because I like optimistic locking, and I’ve 
never used Galera ;).

Where I am ignorant here is of what exactly occurs if you write some rows 
within a transaction with Galera, then do some reads in that same transaction.  
 I’d totally guess that Galera would need to first have SELECTs come from a 
slave node, then the moment it sees any kind of DML / writing, it transparently 
switches the rest of the transaction over to a writer node.   No idea, but it 
has to be something like that?   


 
 
 So, transparent read/write separation depends on such an
 environment. The SQLAlchemy tutorial provides a code sample for it [1].
 Besides, Mike Bayer also provides a blog post about it [2].

So this thing with the “django-style routers”, the way that example is, it 
actually would work poorly with a Session that is not in “autocommit” mode, 
assuming you’re working with regular old databases that are doing some simple 
behind-the-scenes replication.   Because again, if you do a flush, those rows 
go to the master, if the transaction is still open, then reading from the 
slaves you won’t see the rows you just inserted.So in reality, that example 
is kind of crappy, if you’re in a transaction (which we are) you’d really need 
to be doing session.using_bind(“master”) all over the place, and that is 
already way too verbose and hardcoded.   I’m wondering why I didn’t make a huge 
note of that in the post.  The point of that article was more to show that hey, 
you *can* control it at this level if you want to but you need to know what 
you’re doing.

Just to put it out there, this is what I think good high/level master/slave 
separation in the app level (reiterating: *if we want it in the app level at 
all*) should approximately look like:

@transaction.writer
def read_and_write_something(arg1, arg2, …):
# …

@transaction.reader
def only_read_something(arg1, arg2, …):
# …

[openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-07 Thread Li Ma
Getting a massive amount of information from data storage to be displayed is 
where most of the activity happens in OpenStack. The two activities of reading 
data and writing (creating, updating and deleting) data are fundamentally 
different.

The optimization for these two opposite database activities can be done by
physically separating the databases that service these two different
activities. All the writes go to database servers, which then replicate the
written data to the database server(s) dedicated to servicing the reads.

Currently, AFAIK, many OpenStack deployments in production try to take
advantage of a MySQL (including Percona or MariaDB) multi-master Galera
cluster. It is possible to design and implement a read/write separation
scheme for such a DB cluster.

Actually, OpenStack has a method for read scalability via defining
master_connection and slave_connection in configuration, but this method
lacks flexibility because master or slave is decided in the logical
context (code). It's not transparent to the application developer.
As a result, it is not widely used across the OpenStack projects.

So, I'd like to propose a transparent read/write separation method
for oslo.db that every project can happily take advantage of
without any code modification.

Moreover, I'd like to put it on the mailing list in advance to
make sure it is acceptable for oslo.db.

I'd appreciate any comments.

br.
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] port-forwarding for router

2014-07-18 Thread Li Ma
Hi folks,

I'd like to implement port forwarding for routers in the l3-agent node, and
meanwhile I noticed the related blueprint [1] has been around for a long
time. Unfortunately, the code review [2] has been abandoned. Is there any
recent news about this blueprint? AFAIK, this functionality is very
important for operators and end-users.

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
[2] https://review.openstack.org/#/c/60512/

Thanks,
Li Ma

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] port-forwarding for router

2014-07-18 Thread Li Ma
I'd like to use iptables for port forwarding, because it is simple and
straightforward.
I hadn't noticed Tap-as-a-Service before; it seems interesting.
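For the DNAT case, the rule to install is essentially the standard iptables
one; sketched here as the strings a hypothetical agent extension might
generate (addresses and ports are made up):

def port_forwarding_rules(router_ip, outside_port, inside_ip, inside_port,
                          protocol='tcp'):
    """Build the NAT rule for one forwarding entry (sketch)."""
    # PREROUTING rewrites the destination before the routing decision.
    dnat = ('-d %s/32 -p %s --dport %s -j DNAT --to-destination %s:%s'
            % (router_ip, protocol, outside_port, inside_ip, inside_port))
    return [('PREROUTING', dnat)]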

Thanks,
Li Ma

- Original Message -
From: Baohua Yang yangbao...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, July 18, 2014, 3:52:16 PM
Subject: Re: [openstack-dev] [Neutron] port-forwarding for router



Hi Li Ma,
Do you target flexible port forwarding (like [1], which does mirroring)
or something like DNAT (as in [2])?

If the latter, I suggest you contact the bp owner to see if you can work together.


[1] https://review.openstack.org/#/c/96149/6 

[2] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 




On Fri, Jul 18, 2014 at 2:57 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Hi folks, 

I'd like to implement port forwarding for routers in the l3-agent node, and
meanwhile I noticed the related blueprint [1] has been around for a long
time. Unfortunately, the code review [2] has been abandoned. Is there any
recent news about this blueprint? AFAIK, this functionality is very
important for operators and end-users.

[1] https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding 
[2] https://review.openstack.org/#/c/60512/ 

Thanks, 
Li Ma 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




-- 
Best wishes! 
Baohua 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
Hi Kevin,

Thanks for your reply. Actually, it is not that straightforward.
Even if postcommit is outside the 'with' statement, the transaction is not
'truly' committed immediately, because when I put my db code (reading and
writing ml2-related tables) in postcommit, a db lock wait exception is
still thrown.

Li Ma

- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, June 25, 2014, 4:59:26 PM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver



The post_commit methods occur outside of the transactions. You should be able 
to perform the necessary database calls there. 


If you look at the code snippet in the email you provided, you can see that the 
'try' block surrounding the postcommit method is at the same indentation-level 
as the 'with' statement for the transaction so it will be closed at that point. 


Cheers, 
Kevin Benton 


-- 
Kevin Benton 



On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Hi all, 

I'm developing a new mechanism driver. I'd like to access ml2-related tables
in create_port_precommit and create_port_postcommit. However, I find it hard
to do that because the two functions both run inside an existing database
transaction defined in the create_port function of ml2/plugin.py.

The related code is as follows: 

def create_port(self, context, port):
    ...
    session = context.session
    with session.begin(subtransactions=True):
        ...
        self.mechanism_manager.create_port_precommit(mech_context)
    try:
        self.mechanism_manager.create_port_postcommit(mech_context)
    ...  # exception handling elided
    return result

As a result, I need to carefully deal with the nested database transaction
issue to prevent db locks when I develop my own mechanism driver. Right now,
I'm trying to understand the idea behind the scenes. Is it possible to
refactor it so that precommit and postcommit are outside the db transaction?
That would be perfect for those who develop mechanism drivers and do not
know the execution context of the whole ML2 plugin well.

Thanks, 
Li Ma 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 




-- 

Kevin Benton 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
Here's a code sample which can raise db lock wait exception:

def create_port_postcommit(self, context):
    port_id = ...
    with session.begin(subtransactions=True):
        try:
            binding = (session.query(models.PortBinding).
                       filter(models.PortBinding.port_id.startswith(port_id)).
                       one())
            # Here I modify some attributes if the port binding exists
            session.merge(binding)
        except exc.NoResultFound:
            # Here I insert a new port binding record to initialize
            # some attributes
            ...
        except ...:
            LOG.error("error happened")

The exception is as follows:
2014-06-25 10:05:17.195 9915 ERROR neutron.plugins.ml2.managers 
[req-961680da-ce69-43c6-974c-57132def411d None] Mechanism driver 'hello' failed 
in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last):
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/managers.py, line 158, 
in _call_on_drivers
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/mech_hello.py, 
line 95, in create_port_postcommit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers {'port_id': 
port_id})
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 402, in __exit__
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.commit()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 314, in commit
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self._prepare_impl()
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 298, in _prepare_impl
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.session.flush()

...

2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') 'INSERT INTO ml2_port_bindings (port_id, host, 
vnic_type, profile, vif_type, vif_details, driver, segment) VALUES (%s, %s, %s, 
%s, %s, %s, %s, %s)' (...)

It seems that the transaction in the postcommit cannot be committed.

Thanks a lot,
Li Ma

- Original Message -
From: Li Ma skywalker.n...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, June 25, 2014, 6:21:10 PM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver

Hi Kevin,

Thanks for your reply. Actually, it is not that straightforward.
Even if postcommit is outside the 'with' statement, the transaction does not 
seem to be 'truly' committed immediately: when I put my DB code (reading and 
writing ml2-related tables) in postcommit, a DB lock wait exception is still 
thrown.

Li Ma

- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Wednesday, June 25, 2014, 4:59:26 PM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver



The postcommit methods occur outside of the transactions. You should be able 
to perform the necessary database calls there. 


If you look at the code snippet in the email you provided, you can see that the 
'try' block surrounding the postcommit method is at the same indentation level 
as the 'with' statement for the transaction, so the transaction will already be 
closed at that point. 
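
A minimal sketch of that pattern in a mechanism driver, assuming the plugin 
context's session is reachable via the private _plugin_context attribute as in 
the ML2 code of that era; the driver class and the attribute it updates are 
made up for the example:

    from neutron.plugins.ml2 import models

    class HelloMechanismDriver(object):  # hypothetical driver
        def create_port_postcommit(self, context):
            # The outer create_port transaction has already committed at
            # this point, so opening our own transaction here should not
            # nest inside it.
            session = context._plugin_context.session
            with session.begin(subtransactions=True):
                binding = (session.query(models.PortBinding).
                           filter_by(port_id=context.current['id']).
                           one())
                binding.vif_details = '{"example": true}'  # placeholder value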


Cheers, 
Kevin Benton 


-- 
Kevin Benton 



On Tue, Jun 24, 2014 at 8:33 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Hi all, 

I'm developing a new mechanism driver. I'd like to access ml2-related tables 
in create_port_precommit and create_port_postcommit. However, I find it hard 
to do that because the two functions are both inside an existing database 
transaction defined in the create_port function of ml2/plugin.py. 

The related code is as follows: 

def create_port(self, context, port):
    ...
    session = context.session
    with session.begin(subtransactions=True):
        ...
        self.mechanism_manager.create_port_precommit(mech_context)
    try:
        self.mechanism_manager.create_port_postcommit(mech_context)
        ...
    ...
    return result

As a result, I need to deal carefully with nested database transactions to 
avoid DB locks when I develop my own mechanism driver. Right now, I'm trying 
to understand what happens behind the scenes. Is it possible to refactor the 
plugin so that precommit and postcommit run outside the DB transaction? I 
think that would be ideal for those who develop mechanism drivers and are not 
familiar with the inner workings of the whole ML2 plugin.

Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-25 Thread Li Ma
 neutron.plugins.ml2.managers 
execute(statement, multiparams)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1449, in execute
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers params)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1584, in _execute_clauseelement
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
compiled_sql, distilled_params
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1698, in _execute_context
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers context)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/base.py,
 line 1691, in _execute_context
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers context)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/engine/default.py,
 line 331, in do_execute
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
cursor.execute(statement, parameters)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py, line 173, in execute
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
self.errorhandler(self, exc, value)
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers   File 
/usr/lib64/python2.6/site-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers raise 
errorclass, errorvalue
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') 'INSERT INTO ml2_port_bindings (port_id, host, 
vnic_type, profile, vif_type, vif_details, driver, segment) VALUES (%s, %s, %s, 
%s, %s, %s, %s, %s)' ('2f7996c2-7a60-4334-a71d-11e82285e272', '', 'normal', 
{'type': 1, 'priority': 2}, 'unbound', '', None, None)


Thanks,
Li Ma


- Original Message -
From: Kevin Benton blak...@gmail.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, June 26, 2014, 10:32:47 AM
Subject: Re: [openstack-dev] [Neutron ML2] Potential DB lock when developing 
new mechanism driver



What is in the variable named 'query' that you are trying to merge into the 
session? Can you include the full create_port_postcommit method? The line 
raising the exception ends with {'port_id': port_id}), and that doesn't 
match anything included in your sample. 
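
A guess at what the sample intends, assuming 'query' should have been the 
'binding' object fetched just above it: a syntactically complete 
insert-or-update sketch, where the session source and the updated attributes 
are assumptions (the fetched binding can simply be modified in place, making 
the merge unnecessary):

    from neutron.plugins.ml2 import models
    from sqlalchemy.orm import exc

    def create_port_postcommit(self, context):
        session = context._plugin_context.session  # assumed session source
        port_id = context.current['id']
        with session.begin(subtransactions=True):
            try:
                binding = (session.query(models.PortBinding).
                           filter(models.PortBinding.port_id.
                                  startswith(port_id)).
                           one())
                # Update the existing binding (placeholder attribute value).
                binding.profile = '{"type": 1, "priority": 2}'
            except exc.NoResultFound:
                # Initialize a new port binding record.
                session.add(models.PortBinding(port_id=port_id))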



On Wed, Jun 25, 2014 at 6:53 PM, Li Ma  skywalker.n...@gmail.com  wrote: 


Here's a code sample which can raise a db lock wait exception: 

def create_port_postcommit(self, context):
    port_id = ...
    with session.begin(subtransactions=True):
        try:
            binding = (session.query(models.PortBinding).
                       filter(models.PortBinding.port_id.startswith(port_id)).
                       one())
            # Here I modify some attributes if the port binding exists
            session.merge(query)
        except exc.NoResultFound:
            # Here I insert a new port binding record to initialize
            # some attributes
            ...
        except ...:
            LOG.error("error happened")

The exception is as follows: 
2014-06-25 10:05:17.195 9915 ERROR neutron.plugins.ml2.managers 
[req-961680da-ce69-43c6-974c-57132def411d None] Mechanism driver 'hello' failed 
in create_port_postcommit 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers Traceback (most 
recent call last): 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/managers.py, line 158, 
in _call_on_drivers 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context) 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers File 
/usr/lib/python2.6/site-packages/neutron/plugins/ml2/drivers/mech_hello.py, 
line 95, in create_port_postcommit 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers {'port_id': 
port_id}) 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 402, in __exit__ 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers self.commit() 
2014-06-25 10:05:17.195 9915 TRACE neutron.plugins.ml2.managers File 
/usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.8-py2.6-linux-x86_64.egg/sqlalchemy/orm/session.py,
 line 314, in commit 
2014-06-25 10:05:17.195 9915 TRACE

[openstack-dev] [Neutron ML2] Potential DB lock when developing new mechanism driver

2014-06-24 Thread Li Ma
Hi all,

I'm developing a new mechanism driver. I'd like to access ml2-related tables 
in create_port_precommit and create_port_postcommit. However, I find it hard 
to do that because the two functions are both inside an existing database 
transaction defined in the create_port function of ml2/plugin.py.

The related code is as follows:

def create_port(self, context, port):
    ...
    session = context.session
    with session.begin(subtransactions=True):
        ...
        self.mechanism_manager.create_port_precommit(mech_context)
    try:
        self.mechanism_manager.create_port_postcommit(mech_context)
        ...
    ...
    return result

As a result, I need to deal carefully with nested database transactions to 
avoid DB locks when I develop my own mechanism driver. Right now, I'm trying 
to understand what happens behind the scenes. Is it possible to refactor the 
plugin so that precommit and postcommit run outside the DB transaction? I 
think that would be ideal for those who develop mechanism drivers and are not 
familiar with the inner workings of the whole ML2 plugin.

Thanks,
Li Ma



Re: [openstack-dev] Unsubscribe from the mailing list

2014-04-21 Thread Li Ma (Nick)
Hi xueyan,

You can unsubscribe yourself via the following link:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Li Ma

- Original Message -
From: l adolphxue...@163.com
To: openstack-dev@lists.openstack.org
Sent: Monday, April 21, 2014, 10:56:08 AM
Subject: [openstack-dev] Unsubscribe from the mailing list



Hello, 
For some reasons, I would like to leave this mailing list. Thank you! 


xueyan 
2014.4.21 




[openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
Hi stackers,

I'm trying to extend the capability of ports by propagating
binding:profile from neutron-server to the l2-agents.

When I issue the update-port API with a new binding:profile, I find that
no agents are notified of the change. I then checked the code and found
the following function:

def _notify_port_updated(self, mech_context):
    port = mech_context._port
    segment = mech_context.bound_segment
    if not segment:
        # REVISIT(rkukura): This should notify agent to unplug port
        network = mech_context.network.current
        LOG.warning(_("In _notify_port_updated(), no bound segment for "
                      "port %(port_id)s on network %(network_id)s"),
                    {'port_id': port['id'],
                     'network_id': network['id']})
        return
    self.notifier.port_update(mech_context._plugin_context, port,
                              segment[api.NETWORK_TYPE],
                              segment[api.SEGMENTATION_ID],
                              segment[api.PHYSICAL_NETWORK])

I'm not sure why it checks the bound segment here and refuses to send the
port_update out. In my situation, I run a devstack environment and the
bound segment is None by default. Actually, I need this message to be sent
out in all situations.

I'd appreciate any hints.

Thanks a lot,

-- 
---
cheers,
Li Ma




Re: [openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
My misunderstanding: I just found out that this message is sent to the
notifications.info topic.

Anyway, is there any way to get the port_update information on the l2-agents?
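
A minimal sketch of consuming such updates on the agent side, assuming an
l2-agent that implements the same port_update RPC callback pattern as the
OVS agent of that era (the class name is hypothetical):

    import logging

    LOG = logging.getLogger(__name__)

    class ProfileAwareAgent(object):  # hypothetical agent mixin
        def port_update(self, context, **kwargs):
            # The updated port dict arrives in kwargs, as in the OVS agent.
            port = kwargs.get('port') or {}
            profile = port.get('binding:profile') or {}
            # React to the propagated binding:profile here.
            LOG.debug("port %s updated, binding:profile=%s",
                      port.get('id'), profile)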

Thanks,
Li Ma

On 3/17/2014 4:11 PM, Li Ma wrote:
 Hi stackers,

 I'm trying to extend the capability of ports by propagating
 binding:profile from neutron-server to the l2-agents.

 When I issue the update-port API with a new binding:profile, I find that
 no agents are notified of the change. I then checked the code and found
 the following function:

 def _notify_port_updated(self, mech_context):
     port = mech_context._port
     segment = mech_context.bound_segment
     if not segment:
         # REVISIT(rkukura): This should notify agent to unplug port
         network = mech_context.network.current
         LOG.warning(_("In _notify_port_updated(), no bound segment for "
                       "port %(port_id)s on network %(network_id)s"),
                     {'port_id': port['id'],
                      'network_id': network['id']})
         return
     self.notifier.port_update(mech_context._plugin_context, port,
                               segment[api.NETWORK_TYPE],
                               segment[api.SEGMENTATION_ID],
                               segment[api.PHYSICAL_NETWORK])

 I'm not sure why it checks the bound segment here and refuses to send the
 port_update out. In my situation, I run a devstack environment and the
 bound segment is None by default. Actually, I need this message to be sent
 out in all situations.

 I'd appreciate any hints.

 Thanks a lot,


-- 

cheers,
Li Ma




Re: [openstack-dev] [Neutron] _notify_port_updated in ML2 plugin doesn't take effect under some conditions

2014-03-17 Thread Li Ma
Updated: I commented out the segment check in _notify_port_updated of the
ML2 plugin, and now I finally get the port_update message on the l2-agents.

Are there any side effects? It is working for me, but I'm not sure it is
the real solution.
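
Roughly, a sketch of that modification, reusing the module-level LOG, _ and
api names from the snippet quoted earlier in this thread; whether agents
tolerate the None segment attributes is exactly the open question about
side effects:

    def _notify_port_updated(self, mech_context):
        port = mech_context._port
        segment = mech_context.bound_segment
        if not segment:
            # Segment check disabled: still warn, but fall through and
            # notify so unbound ports also trigger a port_update.
            network = mech_context.network.current
            LOG.warning(_("In _notify_port_updated(), no bound segment for "
                          "port %(port_id)s on network %(network_id)s"),
                        {'port_id': port['id'],
                         'network_id': network['id']})
        self.notifier.port_update(
            mech_context._plugin_context, port,
            segment[api.NETWORK_TYPE] if segment else None,
            segment[api.SEGMENTATION_ID] if segment else None,
            segment[api.PHYSICAL_NETWORK] if segment else None)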

thanks,

-- 

cheers,
Li Ma




Re: [openstack-dev] icehouse-3 release cross reference is added into www.xrefs.info

2014-03-17 Thread Li Ma
Good job.

On 3/13/2014 1:51 PM, John Smith wrote:
 icehouse-3 release cross reference is added into www.xrefs.info, check
 it out http://www.xrefs.info. Thx. xrefs.info admin


-- 
---
cheers,
Li Ma





[openstack-dev] how to extend port capability using binding:profile

2014-03-16 Thread Li Ma
Hi all,

I'd like to extend port capability using the ML2 binding:profile attribute.
I checked the official docs and it seems there's no guide for it.

Is there any CLI support for the port binding:profile?
Or is there any development guide on how to set up the profile?
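
For what it's worth, a sketch of setting the profile through
python-neutronclient's update_port call; the credentials, endpoint and
profile contents below are placeholders, and binding:profile is an
admin-only attribute:

    from neutronclient.v2_0 import client

    # Placeholder admin credentials and endpoint.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0')
    # Attach an arbitrary dict as the port's binding:profile.
    neutron.update_port('PORT_UUID',
                        {'port': {'binding:profile': {'foo': 'bar'}}})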

-- 
---
cheers,
Li Ma






Re: [openstack-dev] [neutron] [stable/havana] cherry backport, multiple external networks, passing tests

2014-02-17 Thread Li Ma
Hi, I tested the code and found it difficult to use. I have an Open
vSwitch + VXLAN Neutron environment with multiple external networks. I'd
like to serve all the external networks via only one L3 agent.

I applied this patch to l3_agent.py and also modified l3_agent.ini:
external_network_bridge =
gateway_external_network_id =
(left both values empty)

However, it is weird that the external ports (qg-xxx) are not working
properly. They are bound to br-int, not br-ex.

I also noticed something in the patch:
# L3 agent doesn't use external_network_bridge to handle external
# networks, so bridge_mappings with provider networks will be used
# and the L3 agent is able to handle any external networks.

Actually, I don't use bridge_mappings in the l2-agent. AFAIK, this option
is related to FLAT or VLAN networks, while I use tunneling.

I'm not sure whether this approach can be applied to tunneling networks.
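
For reference, the bridge_mappings approach the patch comment refers to would
look roughly like this on the network node, assuming the external networks are
created as flat or VLAN provider networks (the physnet and bridge names are
placeholders):

    # ovs l2-agent configuration (e.g. ovs_neutron_plugin.ini)
    [ovs]
    bridge_mappings = physext1:br-ex1,physext2:br-ex2

    # l3_agent.ini
    external_network_bridge =
    gateway_external_network_id =

Tenant traffic can still ride the VXLAN tunnels; the mapping only has to cover
the external side, which is usually flat or VLAN even in tunneled deployments.
Whether an equivalent exists for tunneled external networks is the open
question above.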


On 2/12/2014 5:48 PM, Miguel Angel Ajo Pelayo wrote:
 Could any core developer check/approve this if it does look good?

 https://review.openstack.org/#/c/68601/

 I'd like to get it in for the new stable/havana release 
 if it's possible.


 Best regards,
 Miguel Ángel



-- 

cheers,
Li Ma

