Re: [openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

2014-02-11 Thread Jesse Pretorius
On 11 February 2014 18:42, Andrey Danin  wrote:

> We are working on link aggregation support in Fuel. We wonder what are the
> most desirable types of bonding now in datacenters. We had some issues (see
> below) with OVS bond in LACP mode, and it turned out that standard Linux
> bonding (attached to OVS bridges) was a better option in our setup.
>

As a deployer we've been running bonding through OVS in balance-tcp mode
with lacp_time set to fast in production for around six months without any
problems.

We certainly like the flexibility of using OVS for bonding and haven't seen
any issues like those you've mentioned. Then again, our deployment is
considerably smaller and we're using Arista 10G switches.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread trinath.soman...@freescale.com
Hi Gary-

Look into the logs and search for keywords from the code you wrote.

There might be some issues related to PEP8 or invalid import paths.
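
For example, to pull the relevant tracebacks out of the captured test log
(adjust the keyword and file name to your change; subunit_log.txt is the log
mentioned elsewhere in this thread):

grep -n -B 2 -A 10 "Traceback" subunit_log.txt | less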

Hope this helps.

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048

From: Gary Duan [mailto:garyd...@gmail.com]
Sent: Wednesday, February 12, 2014 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: 
process-returncode'

Clark,

You are right. The test run must have bailed out. The question is what I 
should look for. Even a successful run has a lot of Traceback and ERROR 
entries in subunit_log.txt.

Thanks,
Gary

On Tue, Feb 11, 2014 at 7:19 PM, Clark Boylan 
mailto:clark.boy...@gmail.com>> wrote:
On Tue, Feb 11, 2014 at 7:05 PM, Gary Duan 
mailto:garyd...@gmail.com>> wrote:
> Hi, Clark,
>
> Thanks for your reply.
>
> I thought the same thing at first, but the page by default only shows the
> failed cases. The other 1284 cases were OK.
>
> Gary
>
>
> On Tue, Feb 11, 2014 at 6:07 PM, Clark Boylan 
> mailto:clark.boy...@gmail.com>>
> wrote:
>>
>> On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan 
>> mailto:garyd...@gmail.com>> wrote:
>> > Hi,
>> >
>> > The patch I submitted for L3 service framework integration fails on
>> > jenkins
>> > test, py26 and py27. The console only gives following error message,
>> >
>> > 2014-02-12 00:45:01.710 | FAIL: process-returncode
>> > 2014-02-12 00:45:01.711 | tags: worker-1
>> >
>> > and at the end,
>> >
>> > 2014-02-12 00:45:01.916 | ERROR: InvocationError:
>> > '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
>> > neutron.openstack.common.lockutils python setup.py testr --slowest
>> > --testr-args='
>> > 2014-02-12 00:45:01.917 | ___ summary
>> > 
>> > 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
>> >
>> > I wonder what might be the reason for the failure and how to debug this
>> > problem?
>> >
>> > The patch is at, https://review.openstack.org/#/c/59242/
>> >
>> > The console output is,
>> >
>> > http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
>> >
>> > Thanks,
>> >
>> > Gary
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> I haven't dug into this too far but
>>
>> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
>> seems to offer some clues. Not sure why the console output doesn't
>> show the additional non exit code errors (possibly a nonstandard
>> formatter? or a bug?).
>>
>> Also, cases like this tend to be the test framework completely dying
>> due to a sys.exit somewhere or similar. This kills the tests and runs
>> only a small subset of them which seems to be the case here.
>>
>> Clark
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
I picked a different neutron master change and it ran 10k py27
unittests. Pretty sure the test framework is bailing out early here.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] modify_image_attribute() in ec2_api is broken in Nova

2014-02-11 Thread wu jiang
Hi all,

I met some problems when testing the ec2 API 'modify_image_attribute()' in
Nova.
I found the params sent to Nova, following the AWS API format, don't match
the definition in Nova.
I logged it in launchpad: https://bugs.launchpad.net/nova/+bug/1272844

-

1. Here is the definition part of modify_image_attribute():

def modify_image_attribute(
self, context, image_id, attribute, operation_type, **kwargs)

2. And here is the example of it in AWS api:

https://ec2.amazonaws.com/?Action=ModifyImageAttribute&ImageId=ami-61a54008&LaunchPermission.Remove.1.UserId=

-

3. You can see the values aren't suitable to match the definition in the Nova
code.
Therefore, Nova will raise an exception like this:

>TypeError: 'modify_image_attribute() takes exactly 5 non-keyword arguments
(3 given)'

4. I printed out the params sent to Nova via eucaTools.
The results also validate the conclusions above:

> args={'launch_permission': {'add': {'1': {'group': u'all'}}}, 'image_id':
u'ami-0004'}
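
For illustration, here is a minimal standalone reproduction of the mismatch
(the function body and values are placeholders, not Nova code):

def modify_image_attribute(self, context, image_id, attribute,
                           operation_type, **kwargs):
    pass  # placeholder body; only the signature matters here

# params as printed above: no 'attribute' or 'operation_type' is present
args = {'launch_permission': {'add': {'1': {'group': u'all'}}},
        'image_id': u'ami-0004'}

try:
    modify_image_attribute(None, None, **args)
except TypeError as e:
    print(e)  # on Python 2: takes exactly 5 non-keyword arguments (3 given)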

--

So, is this API correct? Should we modify it to match the format of the
AWS API?


Best Wishes,
wingwj
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread Gary Duan
Clark,

You are right. The test run must have bailed out. The question is what I
should look for. Even a successful run has a lot of Traceback and ERROR
entries in subunit_log.txt.

Thanks,
Gary


On Tue, Feb 11, 2014 at 7:19 PM, Clark Boylan wrote:

> On Tue, Feb 11, 2014 at 7:05 PM, Gary Duan  wrote:
> > Hi, Clark,
> >
> > Thanks for your reply.
> >
> > I thought the same thing at first, but the page by default only shows the
> > failed cases. The other 1284 cases were OK.
> >
> > Gary
> >
> >
> > On Tue, Feb 11, 2014 at 6:07 PM, Clark Boylan 
> > wrote:
> >>
> >> On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan  wrote:
> >> > Hi,
> >> >
> >> > The patch I submitted for L3 service framework integration fails on
> >> > jenkins
> >> > test, py26 and py27. The console only gives following error message,
> >> >
> >> > 2014-02-12 00:45:01.710 | FAIL: process-returncode
> >> > 2014-02-12 00:45:01.711 | tags: worker-1
> >> >
> >> > and at the end,
> >> >
> >> > 2014-02-12 00:45:01.916 | ERROR: InvocationError:
> >> > '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
> >> > neutron.openstack.common.lockutils python setup.py testr --slowest
> >> > --testr-args='
> >> > 2014-02-12 00:45:01.917 | ___ summary
> >> > 
> >> > 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
> >> >
> >> > I wonder what might be the reason for the failure and how to debug
> this
> >> > problem?
> >> >
> >> > The patch is at, https://review.openstack.org/#/c/59242/
> >> >
> >> > The console output is,
> >> >
> >> >
> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
> >> >
> >> > Thanks,
> >> >
> >> > Gary
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> I haven't dug into this too far but
> >>
> >>
> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
> >> seems to offer some clues. Not sure why the console output doesn't
> >> show the additional non exit code errors (possibly a nonstandard
> >> formatter? or a bug?).
> >>
> >> Also, cases like this tend to be the test framework completely dying
> >> due to a sys.exit somewhere or similar. This kills the tests and runs
> >> only a small subset of them which seems to be the case here.
> >>
> >> Clark
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> I picked a different neutron master change and it ran 10k py27
> unittests. Pretty sure the test framework is bailing out early here.
>
> Clark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread Clark Boylan
On Tue, Feb 11, 2014 at 7:05 PM, Gary Duan  wrote:
> Hi, Clark,
>
> Thanks for your reply.
>
> I thought the same thing at first, but the page by default only shows the
> failed cases. The other 1284 cases were OK.
>
> Gary
>
>
> On Tue, Feb 11, 2014 at 6:07 PM, Clark Boylan 
> wrote:
>>
>> On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan  wrote:
>> > Hi,
>> >
>> > The patch I submitted for L3 service framework integration fails on
>> > jenkins
>> > test, py26 and py27. The console only gives following error message,
>> >
>> > 2014-02-12 00:45:01.710 | FAIL: process-returncode
>> > 2014-02-12 00:45:01.711 | tags: worker-1
>> >
>> > and at the end,
>> >
>> > 2014-02-12 00:45:01.916 | ERROR: InvocationError:
>> > '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
>> > neutron.openstack.common.lockutils python setup.py testr --slowest
>> > --testr-args='
>> > 2014-02-12 00:45:01.917 | ___ summary
>> > 
>> > 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
>> >
>> > I wonder what might be the reason for the failure and how to debug this
>> > problem?
>> >
>> > The patch is at, https://review.openstack.org/#/c/59242/
>> >
>> > The console output is,
>> >
>> > http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
>> >
>> > Thanks,
>> >
>> > Gary
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> I haven't dug into this too far but
>>
>> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
>> seems to offer some clues. Not sure why the console output doesn't
>> show the additional non exit code errors (possibly a nonstandard
>> formatter? or a bug?).
>>
>> Also, cases like this tend to be the test framework completely dying
>> due to a sys.exit somewhere or similar. This kills the tests and runs
>> only a small subset of them which seems to be the case here.
>>
>> Clark
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I picked a different neutron master change and it ran 10k py27
unittests. Pretty sure the test framework is bailing out early here.

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread Gary Duan
Hi, Clark,

Thanks for your reply.

I thought the same thing at first, but the page by default only shows the
failed cases. The other 1284 cases were OK.

Gary


On Tue, Feb 11, 2014 at 6:07 PM, Clark Boylan wrote:

> On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan  wrote:
> > Hi,
> >
> > The patch I submitted for L3 service framework integration fails on
> jenkins
> > test, py26 and py27. The console only gives following error message,
> >
> > 2014-02-12 00:45:01.710 | FAIL: process-returncode
> > 2014-02-12 00:45:01.711 | tags: worker-1
> >
> > and at the end,
> >
> > 2014-02-12 00:45:01.916 | ERROR: InvocationError:
> > '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
> > neutron.openstack.common.lockutils python setup.py testr --slowest
> > --testr-args='
> > 2014-02-12 00:45:01.917 | ___ summary
> > 
> > 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
> >
> > I wonder what might be the reason for the failure and how to debug this
> > problem?
> >
> > The patch is at, https://review.openstack.org/#/c/59242/
> >
> > The console output is,
> >
> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
> >
> > Thanks,
> >
> > Gary
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> I haven't dug into this too far but
>
> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
> seems to offer some clues. Not sure why the console output doesn't
> show the additional non exit code errors (possibly a nonstandard
> formatter? or a bug?).
>
> Also, cases like this tend to be the test framework completely dying
> due to a sys.exit somewhere or similar. This kills the tests and runs
> only a small subset of them which seems to be the case here.
>
> Clark
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Alexander Tivelkov
Hi Joshua,

Currently we are not 3.x compatible, at least not all of the modules.
Migration to python3 should eventually be done, but for now it is the least
of our concerns, while relying on a non-standard messaging solution instead
of a community-approved one is likely to cause problems, both in terms of
code stability and formal incubation-related requirements.

--
Regards,
Alexander Tivelkov


On Tue, Feb 11, 2014 at 6:52 PM, Joshua Harlow wrote:

>  Is murano python3.x compatible? From what I understand oslo.messaging
> isn't (yet). If murano is supporting python3.x then bringing in
> oslo.messaging might make it hard for murano to be 3.x compatible. Maybe
> not a problem (I'm not sure of murano's python version support).
>
>   From: Serg Melikyan 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, February 11, 2014 at 5:05 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [Murano] Can we migrate to oslo.messaging?
>
>oslo.messaging  is a
> library that provides RPC and Notifications API, they are part of the same
> library for mostly historical reasons. One of the major goals of
> *oslo.messaging* is to provide clean RPC and Notification API without any
> trace of messaging queue concepts (but two of most advanced drivers used by
> oslo.messaging is actually based on AMQP: RabbitMQ and QPID).
>
>  We were designing Murano on messaging queue concepts using some
> AMQP/RabbitMQ specific features, like queue TTL. Since we never considered
> communications between our components in terms of RPC or Notifications and
> always thought about them as message exchange through broker it has
> influenced our components architecture. In Murano we use a simple wrapper
> around Puka (a RabbitMQ client with a most simple and thoughtful async model)
> that is used in all our components. We forked Puka since we had specific
> requirements to SSL and could not yet merge our work back to master.
>
>  Can we abandon our own wrapper around our own fork of Puka in favor of
> *oslo.messaging*? *Yes*, but this migration may be tricky. I believe we
> can migrate to *oslo.messaging* in a week or so.
>
>  I had played with *oslo.messaging* emulating our current communication
> patterns with *oslo.messaging*, and I am certain that current
> implementation can be migrated to *oslo.messaging*. But I am not sure
> that *oslo.messaging* may be easily suited to all future use-cases that
> we plan to cover in a few next releases without major contributions.
> Please, try to respond with any questions related to *oslo.messaging*
> implementation and how it can be fitted with certain use-case.
>
>  Below, I tried to describe our current use-cases and what specific MQ
> features we are using, how they may be implemented with *oslo.messaging* and
> with what limitations we will face.
>
>  Use-Case
> Murano has several components with communications between them based on
> messaging queue:
>  *murano-api* -> *murano-conductor:*
>
>1. *murano-api* sends deployment tasks to murano-conductor
>
>  *murano-conductor* -> *murano-api:*
>
>1. *murano-conductor* reports to *murano-api* task progress
>during processing
>2. after processing, *murano-conductor* sends results to *murano-api*
>
>  *murano-conductor *->* murano-agent:*
>
>1. during task processing *murano-conductor* sends execution plans
>with commands to *murano-agent.*
>
> Note: each of mentioned components above may have more than one instance.
>
>  One messaging-queue-specific feature that we heavily use is the idea of the
> queue itself: messages sent to a component will be handled as soon as at
> least one instance is started. For example, in the case of *murano-agent*,
> the message is sent even before *murano-agent* is started. Another one is
> queue life-time: we control the life-time of *murano-agent* queues to avoid
> overflowing the MQ server with queues that are not used anymore.
>
>  One thing is also worth mentioning: *murano-conductor* communicates with
> several components at the same time: it processes several tasks at the same
> time, and during task processing *murano-conductor* sends progress
> notifications to *murano-api* and execution plans to *murano-agent*.
>
>  Implementation
> Please, refer to the Concepts
> section of the *oslo.messaging* Wiki before further reading to grasp key
> concepts expressed in *oslo.messaging* library. In short, using RPC API
> we can 'call' server synchronously and receive some result, or 'cast'
> asynchronousl

Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Joshua Harlow
Is murano python3.x compatible? From what I understand oslo.messaging isn't 
(yet). If murano is supporting python3.x then bringing in oslo.messaging might 
make it hard for murano to be 3.x compatible. Maybe not a problem (I'm not sure 
of murano's python version support).

From: Serg Melikyan mailto:smelik...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, February 11, 2014 at 5:05 AM
To: OpenStack Development Mailing List 
mailto:OpenStack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

oslo.messaging is a library that 
provides RPC and Notifications API, they are part of the same library for 
mostly historical reasons. One of the major goals of oslo.messaging is to 
provide clean RPC and Notification API without any trace of messaging queue 
concepts (but two of most advanced drivers used by oslo.messaging is actually 
based on AMQP: RabbitMQ and QPID).

We were designing Murano on messaging queue concepts using some AMQP/RabbitMQ 
specific features, like queue TTL. Since we never considered communications 
between our components in terms of RPC or Notifications and always thought 
about them as message exchange through broker it has influenced our components 
architecture. In Murano we use a simple wrapper around Puka (a RabbitMQ client 
with a most simple and thoughtful async model) that is used in all our 
components. We forked Puka since we had specific requirements to SSL and could 
not yet merge our work back to master.

Can we abandon our own wrapper around our own fork of Puka in favor of 
oslo.messaging? Yes, but this migration may be tricky. I believe we can migrate 
to oslo.messaging in a week or so.

I had played with oslo.messaging emulating our current communication patterns 
with oslo.messaging, and I am certain that current implementation can be 
migrated to oslo.messaging. But I am not sure that oslo.messaging may be easily 
suited to all future use-cases that we plan to cover in a few next releases 
without major contributions. Please, try to respond with any questions related 
to oslo.messaging implementation and how it can be fitted with certain use-case.

Below, I tried to describe our current use-cases and what specific MQ features 
we are using, how they may be implemented with oslo.messaging and with what 
limitations we will face.

Use-Case
Murano has several components with communications between them based on 
messaging queue:
murano-api -> murano-conductor:

  1.  murano-api sends deployment tasks to murano-conductor

murano-conductor -> murano-api:

  1.  murano-conductor reports to murano-api task progress during processing
  2.  after processing, murano-conductor sends results to murano-api

murano-conductor -> murano-agent:

  1.  during task processing murano-conductor sends execution plans with 
commands to murano-agent.

Note: each of mentioned components above may have more than one instance.

One messaging-queue-specific feature that we heavily use is the idea of the 
queue itself: messages sent to a component will be handled as soon as at least 
one instance is started. For example, in the case of murano-agent, the message 
is sent even before murano-agent is started. Another one is queue life-time: we 
control the life-time of murano-agent queues to avoid overflowing the MQ server 
with queues that are not used anymore.

One thing is also worth mentioning: murano-conductor communicates with several 
components at the same time: it processes several tasks at the same time, and 
during task processing murano-conductor sends progress notifications to 
murano-api and execution plans to murano-agent.

Implementation
Please, refer to 
Concepts section of 
oslo.messaging Wiki before further reading to grasp key concepts expressed in 
oslo.messaging library. In short, using RPC API we can 'call' server 
synchronously and receive some result, or 'cast' asynchronously (no result is 
returned). Using the Notification API we can send a Notification to the specified 
Target about an event that happened, with a specified event_type, importance and payload.
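
For reference, a minimal sketch of the 'call'/'cast' distinction with the 
oslo.messaging RPC API (the topic and method names below are only placeholders):

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(topic='murano-conductor')  # placeholder topic
client = messaging.RPCClient(transport, target)

client.cast({}, 'handle_task', task={'id': 42})      # asynchronous, no result
status = client.call({}, 'get_status', task_id=42)   # synchronous, waits for a result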

If we move to oslo.messaging we can only primarily rely on features provided by 
RPC/Notifications model:

  1.  We should not rely on messages being delivered unless the other side is 
properly up and running. It is not message delivery, it is a Remote Procedure Call;
  2.  To control queue life-time as we do now, we may be required to 'hack' 
oslo.messaging by writing own driver.

murano-api -> murano-conductor:

  1.  murano-api sends

Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread Clark Boylan
On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan  wrote:
> Hi,
>
> The patch I submitted for L3 service framework integration fails on jenkins
> test, py26 and py27. The console only gives following error message,
>
> 2014-02-12 00:45:01.710 | FAIL: process-returncode
> 2014-02-12 00:45:01.711 | tags: worker-1
>
> and at the end,
>
> 2014-02-12 00:45:01.916 | ERROR: InvocationError:
> '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
> neutron.openstack.common.lockutils python setup.py testr --slowest
> --testr-args='
> 2014-02-12 00:45:01.917 | ___ summary
> 
> 2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
>
> I wonder what might be the reason for the failure and how to debug this
> problem?
>
> The patch is at, https://review.openstack.org/#/c/59242/
>
> The console output is,
> http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
>
> Thanks,
>
> Gary
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

I haven't dug into this too far but
http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
seems to offer some clues. Not sure why the console output doesn't
show the additional non exit code errors (possibly a nonstandard
formatter? or a bug?).

Also, cases like this tend to be the test framework completely dying
due to a sys.exit somewhere or similar. This kills the tests and runs
only a small subset of them which seems to be the case here.
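
One way to narrow it down is to reproduce the run locally and read the raw
subunit stream instead of the console (assuming the usual tox/testr setup the
console output above shows):

tox -e py27
.tox/py27/bin/testr last --subunit | less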

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-11 Thread Gary Duan
Hi,

The patch I submitted for L3 service framework integration fails on jenkins
test, py26 and py27. The console only gives following error message,

2014-02-12 00:45:01.710 | FAIL: process-returncode
2014-02-12 00:45:01.711 | tags: worker-1

and at the end,

2014-02-12 00:45:01.916 | ERROR: InvocationError:
'/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python -m
neutron.openstack.common.lockutils python setup.py testr --slowest
--testr-args='
2014-02-12 00:45:01.917 | ___ summary

2014-02-12 00:45:01.918 | ERROR:   py27: commands failed

I wonder what might be the reason for the failure and how to debug this problem?

The patch is at, https://review.openstack.org/#/c/59242/

The console output is,
http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html

Thanks,

Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Glance]supporting of v1 and v2 glance APIs in Nova

2014-02-11 Thread Lingxian Kong
2014-02-12 0:38 GMT+08:00 Eddie Sheffield :

> > A few days ago, I met some problems when using 'createimage' feature in
> > Nova, we found that using V1 of glanceclient has some problem with
> > processing of metadata, the version number and even the glance URIs are
> > both hardcoded in Nova.
> >
> > then, we found the bluepring[1] proposed, and the maillist[2] which
> talked
> > about the topic before, mainly focused on version autodiscovery by
> keystone
> > catalog and config option for nova. But we still need changes in Nova
> > because the incompatible behavior between v1 and v2, especially when
> > creating and uploading an image file. The review request[3] of the bp is
> > abandoned for now.
> >
> > So, what I want to confirm is, how could this situation be handled? I
> > mailed Eddie Sheffield, but got no answer, so bring it up here.
> >
> > [1]: https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
> > [2]: http://markmail.org/message/uqrpufsmh4qp5pgy
> > [3]: https://review.openstack.org/#/c/38414/
>
> Hi Lingxian,
>
> I'm afraid I somehow didn't see the email you sent to me directly. We
> recently held the Glance Mini-summit and this work was discussed. Andrew
> Laski from the Nova team was also in attendance and provided some input
> from their perspective. Essentially we decided that while autodiscovery is
> desirable, we want to roll that functionality into a much-improved
> python-glanceclient which will present a version-agnostic programming api
> to users of the library. So the immediate plan is to go back to the
> approach outlined in the bp and merge prop you reference above. Then in the
> near future produce the new glanceclient followed by updating Nova to use
> the new library which will address the concerns of autodiscovery among
> other things.
>
> Timeline-wise, I've been a bit covered up with some other work but will be
> getting back to this within a week. There were some concerns about the size
> of the patch so rather than unabandoning the existing one I will be trying
> to put up multiple, smaller patches.
>
> Please let me know if you have any specific concerns or requirements so
> they can be addressed.
>
> 
> Eddie Sheffield
> Rackspace Hosting, Inc.
> eddie.sheffi...@rackspace.com
>
>

Hi Eddie, thanks for your prompt reply and the information you provided.

Could you add me to the review list when you have submitted new patches?
Thanks again!

-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Support for multiple provider networks with same VLAN segmentation id

2014-02-11 Thread Vinay Bannai
One way to look at the VLANs is that each identifies a security zone that
meets some regulatory compliance standards. The VLANs on each rack are
separated from other VLANs by using VRF. Tenants in our case map to
applications. So each application is served by a VIP with a pool of VMs. So
all applications needing regulatory compliance would map to this VLAN. Make
sense?

As for your question about keeping mac_addresses unique, isn't that
achieved by the fact that each VM will have a unique mac, since it is the
same control plane allocating the mac addresses? Or did I miss something?

Vinay


On Tue, Feb 11, 2014 at 12:32 PM, Aaron Rosen  wrote:

> I believe it would need to be like:
>
>  network_vlan_ranges = physnet1:100:300, phynet2:100:300, phynet3:100:300
>
> Additional comments inline:
>
> On Mon, Feb 10, 2014 at 8:49 PM, Vinay Bannai  wrote:
>
>> Bob and Kyle,
>>
>> Thanks for your review.
>> We looked at this option and it seems it might meet our needs. Here is
>> what we intend to do:
>>
>> Let's say we have three racks (each rack supports three VLANs - 100, 200
>> and 300).
>> We create the following config file for the neutron server
>>
>>
>>
>>
>> tenant_network_type = vlan
>>  network_vlan_ranges = physnet1:100:300
>>  network_vlan_ranges = phynet2:100:300
>>  network_vlan_ranges = phynet3:100:300
>>  integration_bridge = br-int
>>  bridge_mappings = physnet1:br-eth1, physnet2:br-eth1, physnet3:br-eth1
>> Is this what you meant?
>>
>> Vinay
>>
>>
>> On Sun, Feb 9, 2014 at 6:03 PM, Robert Kukura  wrote:
>>
>>> On 02/09/2014 12:56 PM, Kyle Mestery wrote:
>>> > On Feb 6, 2014, at 5:24 PM, Vinay Bannai  wrote:
>>> >
>>> >> Hello Folks,
>>> >>
>>> >> We are running into a situation where we are not able to create
>>> multiple provider networks with the same VLAN id. We would like to propose
>>> a solution to remove this restriction through a configuration option. This
>>> approach would not conflict with the present behavior where it is not
>>> possible to create multiple provider networks with the same VLAN id.
>>> >>
>>> >> The changes should be minimal and would like to propose it for the
>>> next summit. The use case for this need is documented in the blueprint
>>> specification.
>>> >> Any feedback or comments are welcome.
>>> >>
>>> >>
>>> https://blueprints.launchpad.net/neutron/+spec/duplicate-providernet-vlans
>>> >>
>>> > Hi Vinay:
>>> >
>>> > This problem seems straightforward enough, though currently you are
>>> right
>>> > in that we don't allow multiple Neutron networks to have the same
>>> segmentation
>>> > ID. I've added myself as approver for this BP and look forward to
>>> further
>>> > discussions of this before and during the upcoming Summit!
>>>
>>>
> I kind of feel like allowing a vlan to span multiple networks is kind of
> wonky. I feel like a better abstraction would be if we had better access
> control over shared networks between tenants. This way we could explicitly
> allow two tenants to share a network. Is this the problem you are trying to
> solve though doing it with the same vlan?  How do you plan on enforcing
> that mac_addresses are unique on the same physical network?
>
>  Multiple networks with network_type of 'vlan' are already allowed to
>>> have the same segmentation ID with the ml2, openvswitch, or linuxbridge
>>> plugins - the networks just need to have different physical_network
>>> names.
>>
>>
> This is the same for the NSX plugin as well.
>
>
>>  If they have the same network_type, physical_network, and
>>> segmentation_id, they are the same network. What else would distinguish
>>> them from each other?
>>>
>>> Could your use case be addressed by simply using different
>>> physical_network names for each rack? This would provide independent
>>> spaces of segmentation_ids for each.
>>>
>>> -Bob
>>>
>>> >
>>> > Thanks!
>>> > Kyle
>>> >
>>> >> Thanks
>>> >> --
>>> >> Vinay Bannai
>>> >> Email: vban...@gmail.com
>>> >> Google Voice: 415 938 7576
>>> >> ___
>>> >> OpenStack-dev mailing list
>>> >> OpenStack-dev@lists.openstack.org
>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Vinay Bannai
>> Email: vban...@gmail.com
>> Google Voice: 415 938 7576
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lis

Re: [openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

2014-02-11 Thread Robert Collins
On 12 February 2014 05:42, Andrey Danin  wrote:
> Hi Openstackers,
>
>
> We are working on link aggregation support in Fuel. We wonder what are the
> most desirable types of bonding now in datacenters. We had some issues (see
> below) with OVS bond in LACP mode, and it turned out that standard Linux
> bonding (attached to OVS bridges) was a better option in our setup.

OVS implements SLB bonding as well as LACP, so we shouldn't need
standard linux bonding at all - I'd rather keep things simple if
possible - having more moving parts than we need is a problem :).

> I want to hear your opinion, guys. What types of bonding do you think are
> better now in terms of stability and performance, so that we can properly
> support them for OpenStack installations.

We'll depend heavily on operator feedback here. - Jay has forwarded
this to the operators list, so let's see what they say.

> Also, we are wondering if there any plans to support bonding in TripleO, and
> how you guys would like to see it be implemented? What is the general
> approach for such complex network configurations for TripleO? We would love
> to extract this piece from Fuel and make it fully independent, so that the
> larger community can use it and we could work collaboratively on it. Right
> now it is actually already granular and can be reused in other projects, and
> implemented as a separated puppet module:
> https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/l23network.

Yes, we'd like to support bonding.

I think we need this modelled in Neutron to do it properly, though I'm
just drafting up a schema for us to model this manually in heat in the
interim (will be at
https://etherpad.openstack.org/p/tripleo-network-configuration
shortly).

Ideally:
 - we use LACP active mode on all ports
 - in-instance logic configures bonding when the same switch is
plugged into two ethernet ports
 - nova's BM driver puts all NIC ports on the same l2 network, if
requested by the user.

However, we don't really want one IP per port when bonding. That will
make DHCP hard - we'd have to have
vswitch
  port nic0
  port nic1
   ..
  port nicN
and then

ovs-vsctl add-port vswitch dhcp1
ovs-vsctl set port dhcp1
ovs-vsctl set interface dhcp1 type=internal
ovs-vsctl set interface dhcp1 mac="$macofnic1"

...

ovs-vsctl add-port vswitch dhcpN
ovs-vsctl set port dhcpN
ovs-vsctl set interface dhcpN type=internal
ovs-vsctl set interface dhcpN mac="$macofnicN"

As the virtual switch only has one MAC - the lowest by default IIRC -
we'd lose the excess IPs and then become unreachable unless other folk
reimplement the heuristic the bridge uses to pick the right IP. Ugh.

So my long term plan is:
Ironic knows about the NICs
nova boot specifies which NICs are bonded (1)
Neutron gets one port for each bonding group, with one ip, and *all*
the possible MACs - so it can answer DHCP for whichever one the server
DHCPs from
We include all the MACs in a vendor DHCP option, and then the server
in-instance logic can build a bridge from that + explicit in-instance
modelling we might do for e.g. heat.

(1): because in a SDN world we might bond things differently on
different deploys

> Description of the problem with LACP we ran into:
>
> https://etherpad.openstack.org/p/LACP_issue

Yeah, matching the configuration is important :). The port overload is
particularly interesting - scalability issues galore.

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.config improvements

2014-02-11 Thread Doug Hellmann
On Tue, Feb 11, 2014 at 4:38 PM, Shaun McCance  wrote:

> On Mon, 2014-02-10 at 10:50 -0500, Doug Hellmann wrote:
> >
> > 2) It locates options based on them being defined at the top
> > level of a
> > module (not in function or packed into some object), but it
> > gets info by
> > looking at cfg.CONF, which require registration. This fails if
> > an option
> > is defined by only registered when some function is called.
> > This happens
> > right now with the ip_lib_force_root option in Neutron. Both
> > the config
> > file generator and config docs generator currently crash on
> > Neutron
> > because of this.
> >
> >
> > The option definition needs to be in a variable at the module top
> > level, even if the registration call happens at run time. Can you file
> > a bug against Neutron about that?
>
> The option *is* currently defined at the module top level, though it's
> only registered when a function is called. _guess_groups in generator.py
> raises an error, because it looks for all instantiated options in the
> cfg.CONF object, and it's not there.
>

Ah, OK, I think we should consider that a bug in the generator.



>
>
> > 3) It's completely incapable of finding options that aren't
> > defined or
> > registered until some function is called or some object is
> > instantiated.
> > I don't have a general solution to this, although I do have
> > special-case
> > code that detects projects that use oslo.messaging and ensures
> > those
> > options get registered.
> >
> >
> > I did some work to address this under
> >
> https://blueprints.launchpad.net/oslo/+spec/improve-config-discovery-for-docs
> so that libraries can register entry points that declare configuration
> options. I think the only library using this feature so far is
> oslo.messaging, but the other oslo libraries will use it when they graduate
> from the incubator.
>
> Oh, thanks. That's much cleaner than my workaround for oslo.messaging:
>
> https://review.openstack.org/#/c/68196/
>
>
>
> > To address these issues, I'd like to get oslo.config to know
> > about the
> > defining module for each option. It can get that using
> > something like
> > this in Opt.__init__:
> >
> > for frame in inspect.stack():
> > mod = inspect.getmodule(frame[0])
> > if mod == sys.modules[__name__]:
> > continue
> > self.module = mod.__name__
> > break
> >
> >
> > I'm not sure we want deployers to have to worry about where the option
> > is defined. How would you use the information in the documentation? I
> > guess we include it in the sample config output?
>
> I agree that deployers shouldn't have to care about the defining module.
> The only reason I've proposed it is that the entire architecture of the
> sample config generator is geared around poking into modules looking for
> options, because it wants to group options by module. This one addition
> to Opt.__init__ would make things simpler and faster for the sample
> config generator.
>
> As for the docs, I'm not sure if we want that info in there or not. I
> wouldn't push hard for it if it's not already there. Whether we'd bother
> with it if it is available is something I'd have to ask the rest of the
> docs team.
>
> >
> > We'd also have to modify __eq__ and __ne__, because it's valid
> > for an
> > option to be defined in two different modules as long as all
> > its
> > parameters are the same. Checking vars() equality would trip
> > up on the
> > module. (There's the further issue that the defining module is
> > non-deterministic in those cases.)
> >
> >
> > It's valid for an option to be registered with the same definition
> > more than once, but having the definition live in more than one place
> > is just asking for maintenance trouble later. Do we actually have this
> > situation now?
>
> auth_strategy is defined in both neutron/common/config.py and
> neutron/agent/linux/interface.py. It would also be defined in
> neutron/agent/metadata/agent.py, except it's defined in a class variable
> and only registered by calling main().
>
> >
> >
> > Then I'd like to get both the config file generator and the
> > config docs
> > generator using something like the OptionsCache class I wrote
> > for the
> > config docs generator:
> >
> >
> http://git.openstack.org/cgit/openstack/openstack-doc-tools/tree/autogenerate_config_docs/common.py#n88
> >
> > This could be simpler and faster and avoid the crash mentioned
> > above
> > with the module info attached to each option.
> >
> >
> > Having 2 programs that discover options in 2 ways is a bad thing, so
> > yes, let's combine them in oslo.config and let

Re: [openstack-dev] [oslo-notify] notifications consumed by multiple subscribers

2014-02-11 Thread Sandy Walsh
The notification system can specify multiple queues to publish to, so each of 
your dependent services can feed from a separate queue. 
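
For example, with the oslo notifier the topics option takes a list and each 
topic gets its own set of queues (the second topic below is just a placeholder 
name):

[DEFAULT]
notification_driver = messaging
notification_topics = notifications,climate_notifications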

However, there is a critical bug in oslo.messaging that has broken this feature. 
https://bugs.launchpad.net/nova/+bug/1277204

Hopefully it'll get fixed quickly and you'll be able to do what you need.

There were plans for a notification consumer in oslo.messaging, but I don't 
know where it stands. I'm working on a standalone notification consumer library 
for rabbit. 

-S


From: Sanchez, Cristian A [cristian.a.sanc...@intel.com]
Sent: Tuesday, February 11, 2014 2:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [oslo-notify] notifications consumed by multiple 
subscribers

Hi,
I’m planning to use oslo.notify mechanisms to implement a climate blueprint: 
https://blueprints.launchpad.net/climate/+spec/notifications. Ideally, the 
notifications sent by climate should be received by multiple services 
subscribed to the same topic. Is that possible with oslo.notify? And moreover, 
is there any mechanism for removing items from the queue? Or should one 
subscriber be responsible for removing items from it?

Thanks

Cristian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.config improvements

2014-02-11 Thread Shaun McCance
On Mon, 2014-02-10 at 10:50 -0500, Doug Hellmann wrote:
> 
> 2) It locates options based on them being defined at the top
> level of a
> module (not in function or packed into some object), but it
> gets info by
> looking at cfg.CONF, which require registration. This fails if
> an option
> is defined by only registered when some function is called.
> This happens
> right now with the ip_lib_force_root option in Neutron. Both
> the config
> file generator and config docs generator currently crash on
> Neutron
> because of this.
> 
> 
> The option definition needs to be in a variable at the module top
> level, even if the registration call happens at run time. Can you file
> a bug against Neutron about that?

The option *is* currently defined at the module top level, though it's
only registered when a function is called. _guess_groups in generator.py
raises an error, because it looks for all instantiated options in the
cfg.CONF object, and it's not there.


> 3) It's completely incapable of finding options that aren't
> defined or
> registered until some function is called or some object is
> instantiated.
> I don't have a general solution to this, although I do have
> special-case
> code that detects projects that use oslo.messaging and ensures
> those
> options get registered.
> 
> 
> I did some work to address this under
> https://blueprints.launchpad.net/oslo/+spec/improve-config-discovery-for-docs 
> so that libraries can register entry points that declare configuration 
> options. I think the only library using this feature so far is 
> oslo.messaging, but the other oslo libraries will use it when they graduate 
> from the incubator.

Oh, thanks. That's much cleaner than my workaround for oslo.messaging:

https://review.openstack.org/#/c/68196/



> To address these issues, I'd like to get oslo.config to know
> about the
> defining module for each option. It can get that using
> something like
> this in Opt.__init__:
> 
> for frame in inspect.stack():
> mod = inspect.getmodule(frame[0])
> if mod == sys.modules[__name__]:
> continue
> self.module = mod.__name__
> break
> 
> 
> I'm not sure we want deployers to have to worry about where the option
> is defined. How would you use the information in the documentation? I
> guess we include it in the sample config output?

I agree that deployers shouldn't have to care about the defining module.
The only reason I've proposed it is that the entire architecture of the
sample config generator is geared around poking into modules looking for
options, because it wants to group options by module. This one addition
to Opt.__init__ would make things simpler and faster for the sample
config generator.
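
For what it's worth, a self-contained sketch of that lookup (an illustration of
the idea, not actual oslo.config code) could be:

import inspect


def find_defining_module(skip_module):
    # Walk up the call stack and return the name of the first module that
    # isn't the one doing the lookup (i.e. skip the cfg module itself).
    for frame in inspect.stack():
        mod = inspect.getmodule(frame[0])
        if mod is None or mod is skip_module:
            continue
        return mod.__name__
    return None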

As for the docs, I'm not sure if we want that info in there or not. I
wouldn't push hard for it if it's not already there. Whether we'd bother
with it if it is available is something I'd have to ask the rest of the
docs team.

> 
> We'd also have to modify __eq__ and __ne__, because it's valid
> for an
> option to be defined in two different modules as long as all
> its
> parameters are the same. Checking vars() equality would trip
> up on the
> module. (There's the further issue that the defining module is
> non-deterministic in those cases.)
> 
> 
> It's valid for an option to be registered with the same definition
> more than once, but having the definition live in more than one place
> is just asking for maintenance trouble later. Do we actually have this
> situation now?

auth_strategy is defined in both neutron/common/config.py and
neutron/agent/linux/interface.py. It would also be defined in
neutron/agent/metadata/agent.py, except it's defined in a class variable
and only registered by calling main().

>  
> 
> Then I'd like to get both the config file generator and the
> config docs
> generator using something like the OptionsCache class I wrote
> for the
> config docs generator:
> 
> 
> http://git.openstack.org/cgit/openstack/openstack-doc-tools/tree/autogenerate_config_docs/common.py#n88
> 
> This could be simpler and faster and avoid the crash mentioned
> above
> with the module info attached to each option.
> 
> 
> Having 2 programs that discover options in 2 ways is a bad thing, so
> yes, let's combine them in oslo.config and let them share code.

Great. So I think there's three things we'll have to figure out:

1) What's the best way to group things by module? The answer might just
be "It's not really worth grouping things by module."

2) Is it sufficient to just ask cfg.CONF for options, 

[openstack-dev] [requirements] problems with non overlapping requirements changes

2014-02-11 Thread Sean Dague
A few weeks ago we realized that one of the wrecking balls in the gate was
non overlapping requirements changes, like this -
https://review.openstack.org/#/c/72475/
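
(By "non overlapping" I mean a bump where the old and new version ranges share
no version at all - e.g. a hypothetical requirements diff like

-foolib>=1.2,<1.3
+foolib>=1.4,<1.5

so nothing that satisfies the new range exists on the mirror until it is
refreshed.)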

Regular jobs in the gate have to use the OpenStack mirror. Requirements
repo doesn't, because it needs to be able to test things not in the mirror.

So when a requirements job goes into the gate, everything behind it will
be using the new requirements. But the mirror isn't updated until the
requirements change merges.

So if you make a non overlapping change like that, for 1hr (or more)
everything in the wake of the requirements job gets blown up in global
requirements because it can't install that from the mirror.

This issue is partially synthetic; however, it does raise a real concern
for continuously deployed environments, because atomic upgrade of
2 code bases isn't a good assumption.

Anyway, the point of this email is we really shouldn't be approving
requirements changes that are disjoint upgrades like that, because they
basically mean they'll trigger 10 - 20 -2s of other people's patches in
the gate.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Replication Contract Verbiage

2014-02-11 Thread Michael Basnight
Daniel Salinas  writes:

> https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API#REPLICATION
>
> I have updated the wiki page to reflect the current proposal for
> replication verbiage with some explanation of the choices.  I would like to
> open discussion here regarding that verbiage.  Without completely
> duplicating everything I just wrote in the wiki here are the proposed words
> that could be used to describe replication between two datastore instances
> of the same type.  Please take a moment to consider them and let me know
> what you think.  I welcome all feedback.
>
> replicates_from:  This term will be used in an instance that is a slave of
> another instance. It is a clear indicator that it is a slave of another
> instance.
>
> replicates_to: This term will be used in an instance that has slaves of
> itself. It is a clear indicator that it is a master of one or more
> instances.

Nice work daniel. I think these are quite sane. They are pretty agnostic
to the datastore type. The only thing i remember Stewart Smith saying
was that these may not _both_ _always_ apply to all datastores. So I'm
assuming we have a builtin way to say that a given datastore/replication
type may not support both of these (or may not have a need to expose it
like this).

> writable: This term will be used in an instance to indicate whether it is
> intended to be used for writes. As replication is used commonly to scale
> read operations it is very common to have a read-only slave in many
> datastore types. It is beneficial to the user to be able to see this
> information when viewing the instance details via the api.

Sounds reasonable. But how do we view a multi-tier slave, aka a slave
of a slave? Is it both read-only and writable, so to speak, depending on
where you are in the cluster hierarchy?
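
For illustration, instance details carrying this verbiage might look roughly
like this (field names per the proposal; the payload shape is just a sketch):

{
    "instance": {
        "id": "inst-2",
        "writable": false,
        "replicates_from": ["inst-1"],
        "replicates_to": ["inst-3"]
    }
}

where inst-2 would be the middle tier: read-only for clients but still feeding
inst-3.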

> The intention here is to:
> 1.  have a clearly defined replication contract between instances.
> 2.  allow users to create a topology map simply by querying the api for
> details of instances linked in the replication contracts
> 3.  allow the greatest level of flexibility for users when replicating
> their data so that Trove doesn't prescribe how they should make use of
> replication.
>
> I also think there is value in documenting common replication topologies
> per datastore type with example replication contracts and/or steps to
> recreate them in our api documentation.  There are currently no examples of
> this yet

++

> e.g. To create multi-master replication in mysql...
>
> As previously stated I welcome all feedback and would love input.
>
> Regards,
>
> Daniel Salinas
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


pgpSmTHKNCxrc.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-11 Thread Jesse Noller

On Feb 11, 2014, at 3:02 PM, Boris Pavlovic  wrote:

> Jesse,
> 
> Why not next approach:
> 1) Switch all clients to apiclient[1] (internal unification)

This was discussed several weeks ago here:

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024441.html

This discussion covered everything you mention and more. At that time, it was 
pointed out that the blueprint the internal unification work was being checked 
in against lacked sufficient design and thought towards the end-user (e.g. 
developers). I have spoken to *a lot* of people - across multiple projects, 
teams and companies - who concur that wrapping the existing clients is 
insufficient and actively *harmful* to the experience of end-users and 
consumers of openstack clouds. More below:

> 2) Unify API of all clients (external unification)
> 3) Unify work with keystone
> 4) Graduate apiclient
> 5) Switch all clients to apiclient lib
> 6) Make one simple pluggable mechanism, e.g. a subclass-based factory + stevedore
> 7) Add, one by one, the subclasses that present each client in this factory
> 8) In one pretty day stop gates & switch to unified client.
> 
> This is actually step by step solution that works for community and could be 
> done independently by tons of developers.
> 

When this last came up, I was Linus’ed:

http://lists.openstack.org/pipermail/openstack-dev/2014-January/024583.html

And not everyone in openstack core is aligned with the common-client-library-2 
work:
https://review.openstack.org/#/q/topic:bp/common-client-library-2,n,z

And it is generally agreed that although "big up-front design" can be a drag, 
the current blueprints and work are confusing, and a project like this actually 
requires a good deal of up-front design to ensure that the API contracts and 
interfaces we expose to users are consistent, usable and not entirely optimized 
for "developers who work on openstack".

common-client-library-2: 
https://blueprints.launchpad.net/oslo/+spec/common-client-library-2

The blueprint is not a design, or a justification - it’s a todo list and while 
I think code cleanup and reuse is generally a Good Thing, I don’t think 
imposing a series of changes without some coordination and planning across the 
-dev group is going to end up with a good, consistent *end-user* experience. In 
fact, I think the goals are orthogonal - the goal of this BP work seems to be 
to reduce duplicated code; this is an optimization for the OpenStack project.

The python-openstacksdk and python-openstackclient are aimed at a fundamentally 
different (but overlapping set sometimes) audience: developers building 
applications that target openstack deployments, and other end-users. See the 
audience section here:

• Application Developers: Application developers are not OpenStack 
Operators or Developers. These are developers looking to consume a feature-rich 
OpenStack Cloud with its many services. These Developers require a consistent, 
single namespace API ("Application Programming Interface") that allows them to 
build and deploy their application with minimal dependencies.

In a perfect scenario, python-openstackclient's one dependency becomes 
python-openstacksdk, which in turn has a dependency list of one or two 
fundamental libraries (e.g. requests). 

I did propose in the original thread that we join efforts: in fact, we already 
have a fully functioning, unified SDK that *could* be checked in today - but 
without discussing the APIs, design and other items with the community at 
large, I don’t think that that would be successful.


> 
> [1] 
> https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient
> 
> 
> Best regards, 
> Boris Pavlovic 
> 
> 
> On Wed, Feb 12, 2014 at 12:40 AM, Jesse Noller  
> wrote:
> As I said last week; we’re ready to kickoff and have regular meetings for the 
> “unified python SDK” project. The initial meeting is scheduled on the wiki:
> 
> https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting
> 
> Date/Time: Feb. 19th - 19:00 UTC / 1pm CST
> 
> IRC channel: #openstack-meeting-3
> 
> Meeting Agenda:
> https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK
> 
> About the project:
> https://wiki.openstack.org/wiki/PythonOpenStackSDK
> 
> If you have questions, all of us lurk in #openstack-sdks on freenode!
> 
> See you then.
> 
> Jesse
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Backup/Restore encryption/decryption issue

2014-02-11 Thread Denis Makogon
As we decided at the meeting, we won't keep our own implementations of
security stuff; we'll use Barbican as the single entry point for delivering
secrets.
I haven't talked with the Barbican team yet, but since oslo-incubator will (someday)
release an oslo.crypto lib for all projects, I think that adding an
implementation of the new RFC to that crypto code is a good idea; it would be easy to
re-use it in Barbican later, and then I will use the Barbican functionality in
Trove for the security improvement.
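
To make the KDF idea more concrete, here is a rough sketch of the kind of
per-tenant derivation I have in mind. It uses hashlib.pbkdf2_hmac, which only
exists in newer Python releases, and the salt handling, iteration count and
key length are placeholders - so treat this purely as a sketch of the
derivation, not as the code I'd merge (see the gist [5] in the quoted text
below for the actual proposal):

    import hashlib

    def derive_backup_key(tenant_id, user_password=None, dklen=32):
        """Derive a per-tenant key for backup encryption/decryption.

        If the user supplied a password, use it; otherwise fall back to
        static tenant attributes (tenant_id), so every tenant still gets
        a distinct key instead of the single password from the config file.
        The 32-byte result could feed the existing AES-256-CBC backup code.
        """
        secret = user_password or tenant_id
        # Placeholder salt and iteration count - the real values would have
        # to be agreed on and reproducible at restore time.
        salt = tenant_id.encode('utf-8')
        return hashlib.pbkdf2_hmac('sha256', secret.encode('utf-8'),
                                   salt, 10000, dklen)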

Best regards,

Denis Makogon

Mirantis, Inc.

Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru

dmako...@mirantis.com


2014-02-11 22:58 GMT+02:00 Michael Basnight :

> Denis Makogon  writes:
>
> > Good day, OpenStack DBaaS community.
> >
> >
> > I'd like to start a conversation about a guestagent security issue
> > related
> > to the backup/restore process. The Trove guestagent service uses AES with a 256 bit
> > key (in CBC mode) [1] to encrypt backups, which are stored in a predefined
> > Swift container.
> >
> > As you can see, the password is defined in the config file [2]. And here
> > comes
> > the problem: this password is used for all tenants/projects that use Trove -
> > it
> > is a security issue. I would like to suggest a Key derivation function [3]
> > based on static attributes specific for each tenant/project (tenant_id).
> > KDF would be based upon python implementation of PBKDF2 [4].
> Implementation
> > can be seen here [5].
>
> I do not want to see us writing our own crypto code in Trove. I'd much
> rather we use barbican for this, assuming it fits the bill. Let's do some
> research on barbican before we go write all of this.
>
> >
> > Also, I'm looking forward to giving the user the ability to pass a password for the
> > KDF that would deliver the key for backup/restore encryption/decryption; if the
> > ingress password (from the user) is empty, the guest will use static
> > attributes of the tenant (tenant_id).
> >
> > To allow backward compatibility, python-troveclient should be able to
> > pass
> > the old password [1] to the guestagent as one of the parameters on the restore call.
> >
> > Blueprint already have been registered in Trove launchpad space, [6].
> >
> > I also foresee porting this feature to oslo-crypt, as part of security
> > framework (oslo.crypto) extensions.
>
> Again, I'd rather see us use barbican for this instead of creating
> oslo-crypt.
>
> >
> > Thoughts ?
> >
> > [1]
> >
> https://github.com/openstack/trove/blob/master/trove/guestagent/strategies/backup/base.py#L113-L116
> > [2]
> >
> https://github.com/openstack/trove/blob/master/etc/trove/trove-guestagent.conf.sample#L69
> > [3] http://en.wikipedia.org/wiki/Key_derivation_function
> > [4] http://en.wikipedia.org/wiki/PBKDF2
> > [5] https://gist.github.com/denismakogon/8823279
> > [6] https://blueprints.launchpad.net/trove/+spec/backup-encryption
> >
> > Best regards,
> > Denis Makogon
> > Mirantis, Inc.
> > Kharkov, Ukraine
> > www.mirantis.com
> > www.mirantis.ru
> > dmako...@mirantis.com
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-11 Thread Boris Pavlovic
Jesse,

Why not the following approach:
1) Switch all clients to apiclient[1] (internal unification)
2) Unify API of all clients (external unification)
3) Unify work with keystone
4) Graduate apiclient
5) Switch all clients to apiclient lib
6) Make one simple pluggable mechanism, e.g. a subclass-based factory +
stevedore
7) Add, one by one, the subclasses that present each client in this factory
8) One fine day, stop the gates & switch to the unified client.

This is actually a step-by-step solution that works for the community and could
be done independently by tons of developers.


[1]
https://github.com/openstack/oslo-incubator/tree/master/openstack/common/apiclient


Best regards,
Boris Pavlovic


On Wed, Feb 12, 2014 at 12:40 AM, Jesse Noller
wrote:

> As I said last week; we're ready to kickoff and have regular meetings for
> the "unified python SDK" project. The initial meeting is scheduled on the
> wiki:
>
> https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting
>
> Date/Time: Feb. 19th - 19:00 UTC / 1pm CST
>
> IRC channel: #openstack-meeting-3
>
> Meeting Agenda:
> https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK
>
> About the project:
> https://wiki.openstack.org/wiki/PythonOpenStackSDK
>
> If you have questions, all of us lurk in #openstack-sdks on freenode!
>
> See you then.
>
> Jesse
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Backup/Restore encryption/decryption issue

2014-02-11 Thread Michael Basnight
Denis Makogon  writes:

> Good day, OpenStack DBaaS community.
>
>
> I'd like to start a conversation about a guestagent security issue related
> to the backup/restore process. The Trove guestagent service uses AES with a 256 bit
> key (in CBC mode) [1] to encrypt backups, which are stored in a predefined
> Swift container.
>
> As you can see, the password is defined in the config file [2]. And here comes
> the problem: this password is used for all tenants/projects that use Trove - it
> is a security issue. I would like to suggest a Key derivation function [3]
> based on static attributes specific for each tenant/project (tenant_id).
> KDF would be based upon python implementation of PBKDF2 [4]. Implementation
> can be seen here [5].

I do not want to see us writing our own crypto code in Trove. I'd much
rather we use barbican for this, assuming it fits the bill. Let's do some
research on barbican before we go write all of this.

>
> Also, I'm looking forward to giving the user the ability to pass a password for the
> KDF that would deliver the key for backup/restore encryption/decryption; if the
> ingress password (from the user) is empty, the guest will use static
> attributes of the tenant (tenant_id).
>
> To allow backward compatibility, python-troveclient should be able to pass
> the old password [1] to the guestagent as one of the parameters on the restore call.
>
> Blueprint already have been registered in Trove launchpad space, [6].
>
> I also foresee porting this feature to oslo-crypt, as part of security
> framework (oslo.crypto) extensions.

Again, I'd rather see us use barbican for this instead of creating oslo-crypt.

>
> Thoughts ?
>
> [1]
> https://github.com/openstack/trove/blob/master/trove/guestagent/strategies/backup/base.py#L113-L116
> [2]
> https://github.com/openstack/trove/blob/master/etc/trove/trove-guestagent.conf.sample#L69
> [3] http://en.wikipedia.org/wiki/Key_derivation_function
> [4] http://en.wikipedia.org/wiki/PBKDF2
> [5] https://gist.github.com/denismakogon/8823279
> [6] https://blueprints.launchpad.net/trove/+spec/backup-encryption
>
> Best regards,
> Denis Makogon
> Mirantis, Inc.
> Kharkov, Ukraine
> www.mirantis.com
> www.mirantis.ru
> dmako...@mirantis.com
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] role of Domain in VPC definition

2014-02-11 Thread Martin, JC
Ravi,

It seems that the following Blueprint 
https://wiki.openstack.org/wiki/Blueprint-aws-vpc-support

has been approved. 

However, I cannot find a discussion with regard to the merit of using project 
vs. domain, or other mechanism for the implementation.

I have an issue with this approach as it prevents tenants within the same 
domain, sharing the same VPC, from having projects.

As an example, if you are a large organization on AWS, it is likely that you 
have a large VPC that will be shared by multiple projects. With this proposal, 
we lose that capability, unless I missed something.

JC

On Dec 19, 2013, at 6:10 PM, Ravi Chunduru  wrote:

> Hi,
>   We had some internal discussions on role of Domain and VPCs. I would like 
> to expand and understand community thinking of Keystone domain and VPCs.
> 
> Is VPC equivalent to Keystone Domain?
> 
> If so, as a public cloud provider - I create a Keystone domain and give it to 
> an organization which wants a virtual private cloud.
> 
> Now the question is: if that organization wants to have department-wise 
> allocation of resources, it becomes difficult to visualize with the existing 
> v3 keystone constructs.
> 
> Currently, it looks like each department of an organization cannot have its 
> own resource management within the organization VPC (LDAP-based user 
> management, network management or dedicating computes, etc.). For us, an 
> OpenStack Project does not match the requirements of a department of an 
> organization.
> 
> I hope you guessed what we wanted - a Domain must have VPCs and a VPC must have 
> projects.
> 
> I would like to know how community see the VPC model in Openstack.
> 
> Thanks,
> -Ravi.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] First meeting scheduled

2014-02-11 Thread Jesse Noller
As I said last week; we’re ready to kickoff and have regular meetings for the 
“unified python SDK” project. The initial meeting is scheduled on the wiki:

https://wiki.openstack.org/wiki/Meetings#python-openstacksdk_Meeting

Date/Time: Feb. 19th - 19:00 UTC / 1pm CST 

IRC channel: #openstack-meeting-3

Meeting Agenda: 
https://wiki.openstack.org/wiki/Meetings/PythonOpenStackSDK

About the project: 
https://wiki.openstack.org/wiki/PythonOpenStackSDK

If you have questions, all of us lurk in #openstack-sdks on freenode!

See you then.

Jesse
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Support for multiple provider networks with same VLAN segmentation id

2014-02-11 Thread Aaron Rosen
I believe it would need to be like:

 network_vlan_ranges = physnet1:100:300, phynet2:100:300, phynet3:100:300

Additional comments inline:

On Mon, Feb 10, 2014 at 8:49 PM, Vinay Bannai  wrote:

> Bob and Kyle,
>
> Thanks for your review.
> We looked at this option and it seems it might meet our needs. Here is
> what we intend to do:
>
> Let's say we have three racks (each rack supports three VLANs - 100, 200
> and 300).
> We create the following config file for the neutron server
>
>
>
>
> tenant_network_type = vlan
>  network_vlan_ranges = physnet1:100:300
>  network_vlan_ranges = phynet2:100:300
>  network_vlan_ranges = phynet3:100:300
>  integration_bridge = br-int
>  bridge_mappings = physnet1:br-eth1, physnet2:br-eth1, physnet3:br-eth1
> Is this what you meant?
>
> Vinay
>
>
> On Sun, Feb 9, 2014 at 6:03 PM, Robert Kukura  wrote:
>
>> On 02/09/2014 12:56 PM, Kyle Mestery wrote:
>> > On Feb 6, 2014, at 5:24 PM, Vinay Bannai  wrote:
>> >
>> >> Hello Folks,
>> >>
>> >> We are running into a situation where we are not able to create
>> multiple provider networks with the same VLAN id. We would like to propose
>> a solution to remove this restriction through a configuration option. This
>> approach would not conflict with the present behavior where it is not
>> possible to create multiple provider networks with the same VLAN id.
>> >>
>> >> The changes should be minimal and would like to propose it for the
>> next summit. The use case for this need is documented in the blueprint
>> specification.
>> >> Any feedback or comments are welcome.
>> >>
>> >>
>> https://blueprints.launchpad.net/neutron/+spec/duplicate-providernet-vlans
>> >>
>> > Hi Vinay:
>> >
>> > This problem seems straightforward enough, though currently you are
>> right
>> > in that we don't allow multiple Neutron networks to have the same
>> segmentation
>> > ID. I've added myself as approver for this BP and look forward to
>> further
>> > discussions of this before and during the upcoming Summit!
>>
>>
I kind of feel like allowing a vlan to span multiple networks is kind of
wonky. I feel like a better abstraction would be if we had better access
control over shared networks between tenants. This way we could explicitly
allow two tenants to share a network. Is this the problem you are trying to
solve, though, by doing it with the same vlan?  How do you plan on enforcing
that mac_addresses are unique on the same physical network?

Multiple networks with network_type of 'vlan' are already allowed to
>> have the same segmentation ID with the ml2, openvswitch, or linuxbridge
>> plugins - the networks just need to have different physical_network
>> names.
>
>
This is the same for the NSX plugin as well.


> If they have the same network_type, physical_network, and
>> segmentation_id, they are the same network. What else would distinguish
>> them from each other?
>>
>> Could your use case be addressed by simply using different
>> physical_network names for each rack? This would provide independent
>> spaces of segmentation_ids for each.
>>
>> -Bob
>>
>> >
>> > Thanks!
>> > Kyle
>> >
>> >> Thanks
>> >> --
>> >> Vinay Bannai
>> >> Email: vban...@gmail.com
>> >> Google Voice: 415 938 7576
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Vinay Bannai
> Email: vban...@gmail.com
> Google Voice: 415 938 7576
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [group-policy] Changing the meeting time

2014-02-11 Thread Kyle Mestery
FYI, I’ve made the change on the meeting pages as well [1]. The Neutron
Group Policy meeting is now at 1700UTC Thursday’s on #openstack-meeting-alt.

Thanks!
Kyle

[1] 
https://wiki.openstack.org/wiki/Meetings#Neutron_Group_Policy_Sub-Team_Meeting

On Feb 11, 2014, at 11:30 AM, Sumit Naiksatam  wrote:

> Hi Kyle,
> 
> The new time sounds good to me as well, thanks for initiating this.
> 
> ~sumit.
> 
> On Tue, Feb 11, 2014 at 9:02 AM, Stephen Wong  wrote:
>> Hi Kyle,
>> 
>>Almost missed this - sounds good to me.
>> 
>> Thanks,
>> - Stephen
>> 
>> 
>> 
>> On Mon, Feb 10, 2014 at 7:30 PM, Kyle Mestery 
>> wrote:
>>> 
>>> Folks:
>>> 
>>> I'd like to propose moving the Neutron Group Policy meeting going
>>> forward, starting with this Thursday. The meeting has been at 1600
>>> UTC on Thursdays, I'd like to move this to 1700UTC Thursdays
>>> on #openstack-meeting-alt. If this is a problem for anyone who
>>> regularly attends this meeting, please reply here. If I don't hear
>>> any replies by Wednesday, I'll officially move the meeting.
>>> 
>>> Thanks!
>>> Kyle
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

2014-02-11 Thread Dan Prince


- Original Message -
> From: "Andrey Danin" 
> To: openstack-dev@lists.openstack.org, "fuel-dev" 
> 
> Sent: Tuesday, February 11, 2014 11:42:46 AM
> Subject: [openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack
> 
> Hi Openstackers,
> 
> We are working on link aggregation support in Fuel. We wonder what are the
> most desirable types of bonding now in datacenters. We had some issues (see
> below) with OVS bond in LACP mode, and it turned out that standard Linux
> bonding (attached to OVS bridges) was a better option in our setup.
> 
> I want to hear your opinion, guys. What types of bonding do you think are
> better now in terms of stability and performance, so that we can properly
> support them for OpenStack installations.
> 
> Also, we are wondering if there any plans to support bonding in TripleO,
> and how you guys would like to see it be implemented? What is the general
> approach for such complex network configurations for TripleO?

The OVS bonded approach could work quite well with TripleO since we already use 
an OVS bridge to provide network access. Right now we do most of our OVS 
network configuration via the ensure-bridge script/tool. It takes care of 
creating the OVS bridge and moving our physical NIC onto the bridge so that 
instances have network connectivity in our flat network. I recently re-factored 
this tool so that it uses persistent network configuration files:

 https://review.openstack.org/#/c/69918/

Once we get that in we could simply add in the extra OVSBonding config as 
outlined here and I think we'd have it:

 https://github.com/osrg/openvswitch/blob/master/rhel/README.RHEL#L108



As for standard linux bonding we could do that too... perhaps via another tool 
that runs before ensure-bridge does its thing as we'd still want the ability to 
put the bonded interface on an OVS bridge.

Dan

> We would love to extract this piece from Fuel and make it fully independent, 
> so that the
> larger community can use it and we could work collaboratively on it. Right
> now it is actually already granular and can be reused in other projects,
> and implemented as a separate puppet module:
> https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/l23network
> .
> 
> Some links with our design considerations:
> 
> https://etherpad.openstack.org/p/fuel-bonding-design
> 
> https://blueprints.launchpad.net/fuel/+spec/nics-bonding-enabled-from-ui
> 
> 
> 
> UI mockups:
> 
> https://drive.google.com/file/d/0Bw6txZ1qvn9CaDdJS0ZUcW1DeDg/edit?usp=sharing
> 
> Description of the problem with LACP we ran into:
> https://etherpad.openstack.org/p/LACP_issue
> 
> Thanks,
> 
> 
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Meeting Tuesday February 11th at 19:00 UTC

2014-02-11 Thread Elizabeth Krumbach Joseph
On Mon, Feb 10, 2014 at 7:48 AM, Elizabeth Krumbach Joseph
 wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting tomorrow, Tuesday February 11th, at 19:00 UTC in
> #openstack-meeting

Thanks to everyone who attended, minutes and logs now available here:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-11-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-11-19.01.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-11-19.01.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-notify] notifications consumed by multiple subscribers

2014-02-11 Thread Doug Hellmann
On Tue, Feb 11, 2014 at 1:28 PM, Sanchez, Cristian A <
cristian.a.sanc...@intel.com> wrote:

> Hi,
> I'm planning to use oslo.notify mechanisms to implement a climate
> blueprint: https://blueprints.launchpad.net/climate/+spec/notifications.
> Ideally, the notifications sent by climate should be received by multiple
> services subscribed to the same topic. Is that possible with oslo.notify?
> And moreover, is there any mechanism for removing items from the queue? Or
> should one subscriber be responsible for removing items from it?
>

Each service should subscribe to the same exchange and topic using a
different queue name to receive all of the messages.

The code to set up the listeners is still under review for oslo.messaging.
See the patch series starting with https://review.openstack.org/#/c/57275/16
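
Once that series lands, the setup should look roughly like this - treat it as
a sketch, since the API is still in review and the names may change before the
final release (the 'pool' argument here stands in for the per-service queue
name mentioned above):

    from oslo.config import cfg
    from oslo import messaging


    class ClimateEndpoint(object):
        # Called for notifications sent at the 'info' priority.
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Each subscribed service handles the event independently.
            print(event_type, payload)


    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    # Each service uses its own pool/queue name so that all of them receive
    # every message instead of competing for the same queue.
    listener = messaging.get_notification_listener(
        transport, targets, [ClimateEndpoint()], pool='climate-service-1')
    listener.start()
    listener.wait()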

Doug



>
> Thanks
>
> Cristian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Doug Hellmann
On Tue, Feb 11, 2014 at 9:59 AM, Serg Melikyan wrote:

> Doug, thank you for your response!
>
> With moving to *oslo.messaging* I think we need to carefully
> re-evaluate our communication patterns and try to either fit them to
> RPC/Notification, or work on extending *oslo.messaging* with some new
> ideas. Adopting a new library is always a challenge.
>

That makes sense.

> The RPC modules expect a response and provide timeout behavior, but the
> notifications don't require that. Perhaps you could send those messages as
> notifications?
> Yes, notifications can be used as a replacement for sending messages, but I
> think this may be an even more broken design than trying to replace messaging
> with RPC without an actual communications re-design.
>
> > The library is meant to be reusable, so if the API does not support your
> use case please work with us to extend it rather than hacking around it or
> forking it
> Sure, I have no intents to hack/change/extend/fork anything related to
> oslo.messaging without consulting with oslo team first. I understand that
> making tools used by OpenStack better is very important to all of us.
>

OK, great!

> When I talked about 'hacking' I had in mind a quick way to move to
> *oslo.messaging* without losing any of the existing functionality. Since
> oslo.messaging is meant to hide away the underlying implementation and
> message-queue-related things, I am not sure that adding such a specific thing
> as queue-ttl for selected queues is possible without extending some classes
> from the driver layer.
>

As long as we can find a way to make it backwards-compatible, for the
projects that don't use the feature, I don't see a problem in principle.
Experimenting within the Murano repository may be the easiest way to work
out what those APIs need to be. I'm glad to hear you would contribute that
work back upstream, though. :-)

Doug



>
>
>
> On Tue, Feb 11, 2014 at 6:06 PM, Doug Hellmann <
> doug.hellm...@dreamhost.com> wrote:
>
>>
>>
>>
>> On Tue, Feb 11, 2014 at 8:05 AM, Serg Melikyan wrote:
>>
>>> oslo.messaging is a
>>> library that provides RPC and Notifications APIs; they are part of the same
>>> library for mostly historical reasons. One of the major goals of
>>> *oslo.messaging* is to provide a clean RPC and Notification API without
>>> any trace of messaging queue concepts (but the two most advanced drivers
>>> used by oslo.messaging are actually based on AMQP: RabbitMQ and QPID).
>>>
>>> We were designing Murano on messaging queue concepts using some
>>> AMQP/RabbitMQ specific features, like queue TTL. Since we never considered
>>> communications between our components in terms of RPC or Notifications, and
>>> always thought about them as message exchange through a broker, it has
>>> influenced our components' architecture. In Murano we use a simple wrapper
>>> around Puka (a RabbitMQ client with a most simple and thoughtful async
>>> model) that is used in all our components. We forked Puka since we had
>>> specific requirements for SSL and could not yet merge our work back
>>> to master.
>>>
>>> Can we abandon our own wrapper around our own fork of Puka in favor of
>>> *oslo.messaging*? *Yes*, but this migration may be tricky. I believe we
>>> can migrate to *oslo.messaging* in a week or so.
>>>
>>> I had played with *oslo.messaging*, emulating our current communication
>>> patterns with it, and I am certain that the current
>>> implementation can be migrated to *oslo.messaging*. But I am not sure
>>> that *oslo.messaging* can be easily suited to all future use-cases that
>>> we plan to cover in the next few releases without major contributions.
>>> Please, try to respond with any questions related to the *oslo.messaging*
>>> implementation
>>> and how it can be fitted to certain use-cases.
>>>
>>> Below, I tried to describe our current use-cases and what specific MQ
>>> features we are using, how they may be implemented with
>>> *oslo.messaging* and with what limitations we will face.
>>>
>>> Use-Case
>>> Murano has several components with communications between them based on
>>> messaging queue:
>>> *murano-api* -> *murano-conductor:*
>>>
>>>1. *murano-api* sends deployment tasks to murano-conductor
>>>
>>> *murano-conductor* -> *murano-api:*
>>>
>>>1. *murano-conductor* reports to *murano-api* task progress
>>>during processing
>>>2. after processing, *murano-conductor* sends results to *murano-api*
>>>
>>> *murano-conductor *->* murano-agent:*
>>>
>>>1. during task processing *murano-conductor* sends execution plans
>>>with commands to *murano-agent.*
>>>
>>> Note: each of mentioned components above may have m

[openstack-dev] [oslo-notify] notifications consumed by multiple subscribers

2014-02-11 Thread Sanchez, Cristian A
Hi,
I’m planning to use oslo.notify mechanisms to implement a climate blueprint: 
https://blueprints.launchpad.net/climate/+spec/notifications. Ideally, the 
notifications sent by climate should be received by multiple services 
subscribed to the same topic. Is that possible with oslo.notify? And moreover, 
is there any mechanism for removing items from the queue? Or should one 
subscriber be responsible for removing items from it?

Thanks

Cristian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Sylvain Bauza
2014-02-11 18:28 GMT+01:00 Yathiraj Udupi (yudupi) :

>
>
>  Thanks for your pointers about Climate.  I will take a closer look at it
> and try it out.  So after a reservation lease for a VM is made by Climate,
> who acts on it to finally instantiate the VM ? Is it Climate or Nova should
> act on the lease to finally provision the VM.
>


That depends on the plugin mapped with the reservation type. The current
plugin mapped for virtual instances does unshelve the instance at lease
start (because Nova extension shelved it at lease creation).

If you don't like this behaviour, it's quite easy to create another
plugin for another reservation type. When the lease starts, it does run the
lease_create() action defined by the plugin. So, if you don't like
shelving/unshelving, that's not a problem, feel free to add a blueprint
about your needs or create your own plugin.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Clint Byrum
Thanks Kevin, great summary.

This is beyond the scope of in-instance notification. I think this is
more like the generic notification API that Thomas Herve suggested in
the rolling updates thread. It can definitely use the same method for
its implementation, and I think it is adjacent to, and not in front of
or behind, the in-instance case.

Excerpts from Fox, Kevin M's message of 2014-02-11 09:22:28 -0800:
> Another scaling down/update use case:
> Say I have a pool of ssh servers for users to use (compute cluster login 
> nodes).
> Autoscaling up is easy. Just launch a new node and add it to the load 
> balancer.
> 
> Scaling down/updating is harder. It should ideally:
>  * Set the admin state on the load balancer for the node, ensuring no new 
> connections go to the node.
>  * Contact the node or the balancer and ensure all outstanding connections 
> are complete. Wait for it. This could be a long long time.
>  * Destroy or update the node
> 
> Thanks,
> Kevin
> 
> From: Clint Byrum [cl...@fewbar.com]
> Sent: Tuesday, February 11, 2014 8:13 AM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Heat] in-instance update hooks
> 
> Excerpts from Steven Dake's message of 2014-02-11 07:19:19 -0800:
> > On 02/10/2014 10:22 PM, Clint Byrum wrote:
> > > Hi, so in the previous thread about rolling updates it became clear that
> > > having in-instance control over updates is a more fundamental idea than
> > > I had previously believed. During an update, Heat does things to servers
> > > that may interrupt the server's purpose, and that may cause it to fail
> > > subsequent things in the graph.
> > >
> > > Specifically, in TripleO we have compute nodes that we are managing.
> > > Before rebooting a machine, we want to have a chance to live-migrate
> > > workloads if possible, or evacuate in the simpler case, before the node
> > > is rebooted. Also in the case of a Galera DB where we may even be running
> > > degraded, we want to ensure that we have quorum before proceeding.
> > >
> > > I've filed a blueprint for this functionality:
> > >
> > > https://blueprints.launchpad.net/heat/+spec/update-hooks
> > >
> > > I've cobbled together a spec here, and I would very much welcome
> > > edits/comments/etc:
> > >
> > > https://etherpad.openstack.org/p/heat-update-hooks
> > Clint,
> >
> > I read through your etherpad and think there is a relationship to a use
> > case for scaling.  It would be sweet if both these cases could use the
> > same model and tooling.  At the moment in an autoscaling group, when you
> > want to scale "down" there is no way to quiesce the node before killing
> > the VM.  It is the same problem you have with Galera, except your
> > scenario involves an update.
> >
> > I'm not clear how the proposed design could be made to fit this
> > particular use case.  Do you see a way it can fill both roles so we
> > don't have two different ways to do essentially the same thing?
> >
> 
> I see scaling down as an update to a nested stack which contains all of
> the members of the scaling group.
> 
> So if we start focusing on scaling stacks, as Zane suggested in
> the rolling updates thread, then the author of said stack would add
> action_hooks to the resources that are scaled.
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Dina Belova
>
> So after a reservation lease for a VM is made by Climate, who acts on it
> to finally instantiate the VM ? Is it Climate or Nova should act on the
> lease to finally provision the VM.


Right now, to take the resources for the VM, it's actually created and then
immediately shelved, so that it is not in an active state and does not actually
do work while it's not needed. Climate then uses Keystone trusts to unshelve it
at the time the lease actually starts.

Thanks,
Dina


On Tue, Feb 11, 2014 at 9:28 PM, Yathiraj Udupi (yudupi)
wrote:

>  hi Sylvain and Dina,
>
>  Thanks for your pointers about Climate.  I will take a closer look at it
> and try it out.  So after a reservation lease for a VM is made by Climate,
> who acts on it to finally instantiate the VM ? Is it Climate or Nova should
> act on the lease to finally provision the VM.
>
>  Thanks,
> Yathi.
>
>   On 2/11/14, 8:42 AM, "Sylvain Bauza"  wrote:
>
>   Le 11/02/2014 17:23, Yathiraj Udupi (yudupi) a écrit :
>
> Hi Dina,
>
>  Thanks for note about Climate logic.  This is something that will be
> very useful, when we will have to schedule from Nova multiple instances (of
> potentially different flavors) as a single request.  If the Solver
> Scheduler, can make a request to the Climate service to reserve the
> resources soon after the placement decision has been made, then the nova
> provisioning logic can handle the resource provisioning using the climate
> reserved leases.  Regarding Solver Scheduler for your reference, just sent
> another email about this with some pointers about it.  Otherwise this is
> the blueprint -
> https://blueprints.launchpad.net/nova/+spec/solver-scheduler
> I guess this is something to explore more and see how Nova provisioning
> logic to work with Climate leases. Or this is something that already works.
>  I need to find out more about Climate.
>
>  Thanks,
> Yathi.
>
>
>
> There are possibly 2 ways for creating a lease : either thru the CLI or by
> the python binding.
>
> We implemented these 2 possibilities within the current Climate 0.1
> release :
>  - a Nova extension plugin is responsible for creating the lease if a VM
> should be reserved (using the Climate pythonclient binding)
>  - an user can request for reserving a compute host using the Climate
> python client directly
>
> Both logics (VM and compute host) are actually referring to 2 distinct
> plugins in the Climate manager, so the actions are completely different.
>
> Based on your use-case, you could imagine a call from the SolverScheduler
> to Climate for creating a lease containing multiple VM reservations, and
> either you would use the Climate VM plugin or you would use a dedicated
> plugin if your need is different.
>
> I don't think that's a huge volume of work, as Climate already defines and
> implements the main features that you need.
>
> -Sylvain
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Proposal for model

2014-02-11 Thread WICKES, ROGER
[Roger] Hi Stephen! Great job! Obviously your experience is both awesome and 
essential here.

I would ask that we add a historical archive (physically implemented as a log 
file, probably) 
object to your model. When you mentioned sending data off to Ceilometer, that 
triggered 
me to think about one problem I have had to deal with: "what packet went 
where?" 
in diagnosing errors, usually related to having a bug on 1 out of 5 
load-balanced servers, 
usually because of a deployed version mismatch, but it could also be due to a virus. 
When our
customer sees "hey, every now and then this image is broken on a web page", that 
points
us to an inconsistent farm, and having the ability to trace or see which server 
got that customer's
packet (as routed to by the LB) would really help in pinpointing the errant 
server.  

> Benefits of a new model
>
> If we were to adopt either of these data models, this would enable us to
> eventually support the following feature sets, in the following ways (for
> example):
>
> Automated scaling of load-balancer services
>
[Roger] Would the Heat module be called on to add more LB's to the farm?

> I talked about horizontal scaling of load balancers above under "High
> Availability," but, at least in the case of a software appliance, vertical
> scaling should also be possible in an active-standby cluster_model by

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [group-policy] Changing the meeting time

2014-02-11 Thread Sumit Naiksatam
Hi Kyle,

The new time sounds good to me as well, thanks for initiating this.

~sumit.

On Tue, Feb 11, 2014 at 9:02 AM, Stephen Wong  wrote:
> Hi Kyle,
>
> Almost missed this - sounds good to me.
>
> Thanks,
> - Stephen
>
>
>
> On Mon, Feb 10, 2014 at 7:30 PM, Kyle Mestery 
> wrote:
>>
>> Folks:
>>
>> I'd like to propose moving the Neutron Group Policy meeting going
>> forward, starting with this Thursday. The meeting has been at 1600
>> UTC on Thursdays, I'd like to move this to 1700UTC Thursdays
>> on #openstack-meeting-alt. If this is a problem for anyone who
>> regularly attends this meeting, please reply here. If I don't hear
>> any replies by Wednesday, I'll officially move the meeting.
>>
>> Thanks!
>> Kyle
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Yathiraj Udupi (yudupi)
hi Sylvain and Dina,

Thanks for your pointers about Climate.  I will take a closer look at it and 
try it out.  So after a reservation lease for a VM is made by Climate, who acts 
on it to finally instantiate the VM? Is it Climate or Nova that should act on the 
lease to finally provision the VM?

Thanks,
Yathi.

On 2/11/14, 8:42 AM, "Sylvain Bauza" 
mailto:sylvain.ba...@bull.net>> wrote:

On 11/02/2014 17:23, Sylvain Bauza wrote:
Hi Dina,

Thanks for note about Climate logic.  This is something that will be very 
useful, when we will have to schedule from Nova multiple instances (of 
potentially different flavors) as a single request.  If the Solver Scheduler, 
can make a request to the Climate service to reserve the resources soon after 
the placement decision has been made, then the nova provisioning logic can 
handle the resource provisioning using the climate reserved leases.  Regarding 
Solver Scheduler for your reference, just sent another email about this with 
some pointers about it.  Otherwise this is the blueprint - 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler
I guess this is something to explore more and see how Nova provisioning logic 
to work with Climate leases. Or this is something that already works.  I need 
to find out more about Climate.

Thanks,
Yathi.



There are possibly 2 ways for creating a lease : either thru the CLI or by the 
python binding.

We implemented these 2 possibilities within the current Climate 0.1 release :
 - a Nova extension plugin is responsible for creating the lease if a VM should 
be reserved (using the Climate pythonclient binding)
 - an user can request for reserving a compute host using the Climate python 
client directly

Both logics (VM and compute host) are actually referring to 2 distinct plugins 
in the Climate manager, so the actions are completely different.

Based on your use-case, you could imagine a call from the SolverScheduler to 
Climate for creating a lease containing multiple VM reservations, and either 
you would use the Climate VM plugin or you would use a dedicated plugin if your 
need is different.

I don't think that's a huge volume of work, as Climate already defines and 
implements the main features that you need.

-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] Proposing Sergey Lukjanov for infra-core

2014-02-11 Thread Sergey Lukjanov
Thank you all!

I'm glad to help the infra team with the rapidly growing number of CRs to their
projects.


On Tue, Feb 11, 2014 at 2:24 PM, Thierry Carrez wrote:

> James E. Blair wrote:
> > I'm very pleased to propose that we add Sergey Lukjanov to the
> > infra-core team.
> >
> > He is among the top reviewers of projects in openstack-infra, and is
> > very familiar with how jenkins-job-builder and zuul are used and
> > configured.  He has done quite a bit of work in helping new projects
> > through the process and ensuring that changes to the CI system are
> > correct.  In addition to providing very helpful reviews he has also
> > contributed significant patches to our Python projects illustrating a
> > high degree of familiarity with the code base and project direction.
> > And as a bonus, we're all looking forward to once again having an
> > infra-core member in a non-US time zone!
>
> +1!
>
> My only fear is that Sergey is going in too many directions at the same
> time (remember he is also the Savanna^WSahara/Caravan/Batyr/Slonik PTL)
> and might burn out. But so far he has been gracefully handling the
> increasing load... so let's see how that goes :)
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

2014-02-11 Thread Clint Byrum
Excerpts from Andrey Danin's message of 2014-02-11 08:42:46 -0800:
> Hi Openstackers,
> 
> We are working on link aggregation support in Fuel. We wonder what are the
> most desirable types of bonding now in datacenters. We had some issues (see
> below) with OVS bond in LACP mode, and it turned out that standard Linux
> bonding (attached to OVS bridges) was a better option in our setup.
> 
> I want to hear your opinion, guys. What types of bonding do you think are
> better now in terms of stability and performance, so that we can properly
> support them for OpenStack installations.
> 
> Also, we are wondering if there any plans to support bonding in TripleO,
> and how you guys would like to see it be implemented? What is the general
> approach for such complex network configurations for TripleO? We would love
> to extract this piece from Fuel and make it fully independent, so that the
> larger community can use it and we could work collaboratively on it. Right
> now it is actually already granular and can be reused in other projects,
> and implemented as a separate puppet module:
> https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/l23network

For Nova baremetal and Ironic, they're going to feed MACs to Neutron which
it would then serve DHCP for. I am not sure how that relates to bonding,
and if the MAC would change for DHCP requests after bonding is configured,
but that would have to be addressed and tested.

Otherwise, it would work largely the same way I think. Just configure the
image to read config parameters from the Heat metadata and subsequently
configure bonding. It is even possible that the facter cfn plugin could
be made to work with a little bit of modification, perhaps allowing you
to forklift the puppet modules into a diskimage-builder element:

https://github.com/oppian/oppian-puppet-modules/blob/master/modules/cfn/lib/facter/cfn.rb

I've heard of no plans for supporting this configuration of servers,
but I see no reason it couldn't be added as an option if it was important
to deployers.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] Idea: Floating IPv6 - "Without any kind of NAT"

2014-02-11 Thread Veiga, Anthony

Hello Stackers!

It is very nice to watch the OpenStack evolution in IPv6! Great job guys!!

Thanks!



I have another idea:

"Floating IP" for IPv6, or just "Floating IPv6"


With IPv4, as we know, OpenStack has a feature called "Floating IP", which is 
basically a 1-to-1 NAT rule (within the tenant's Namespace q-router). In IPv4 
networks, we need this "Floating IP" attached to an Instance, to be able to 
reach it from the Internet (I don't like it). But, what is the use case for a 
"Floating IP" when you have no NAT* (as it is with IPv6)?!

There are definitely cases for it, and we were planning to address it in a 
future release.


At first, with IPv6, I was planning to disable the "Floating IP" feature 
entirely, by removing it from the Dashboard and from the APIs (even for IPv4, if 
FWaaS can somehow manage the q-router IPv4 NAT rules, and not only the 
"iptables filter table") and then, I just had an idea!

For IPv6, the "Floating IP" can still be used to allocate more (and more) IPs 
to an Instance BUT, instead of creating a NAT rule (like it is for IPv4), it 
will configure DNSMasq (or something like it) to provide more IPv6 addresses 
per MAC / Instance. That way, we can virtually allocate unlimited IPs (v6) for 
each Instance!

It will be pretty cool to see the attached "Floating IPv6", literally "floating 
around" the tenant subnet, appearing inside the Instances itself (instead of 
inside the tenant's Namespace), so, we'll be able to see it (the Floating IPv6) 
with "ip -6 address" command within the attached Instance!

The only problem I see with this is that, for IPv4, the allocated "Floating 
IPs" come from the "External Network" (neutron / --allocation-pool) and, for 
IPv6, they will come from the tenant's IPv6 subnet itself... I think... Right?!

I think the real issue here is how neutron handles these from a network/port 
perspective.  In the IPv4 case, the IPs are from an entirely separate block.  
I think that should probably stay the same for IPv6, since you have twofold 
problems:


  1.  You've already issued an RA on the network for a specific address scope.  
Another one, when an interface is already addressed, won't trigger adding a new 
address.
  2.  Routing.  Which address should be the source address? If it's on the same 
network, the majority of distributions will ignore further routes and addresses 
for the purpose of sourcing packets.

Because of the above, it would make sense for floats to end up on a "floating 
IP" subnet.  However, we'd go back to earlier neutron issues about having 
multiple subnets on a network.  So then, we end up with a new network, a new 
subnet and a new port.  My personal vote is to go this route.


---
Why I want tons of IPv6 within each Instance?

A.: Because we can! I mean, we can go back to the days when we had 1 website 
per 1 public IP (i.e. using IP-Based Virtual Hosts with Apache - I prefer this 
approach).

Also, we can try to turn the "Floating IPv6" into some kind of "Floating IPv6 
Range"; this way, we can, for example, allocate millions of IPs per Instance, 
like this in DHCPv6: "range6 2001:db8:1:1::1000 2001:db8:1:1000:1000;"...

I'd also argue for security/policy/reserved addressing reasons.  Also, let's 
not get into the habit of assigning loads of addresses.  Each network needs to 
be a /64, and going all out means you're back to the exhaustion issue we have 
in IPv4 ;)

---

NOTE: I prefer multiple IPs per Instance, instead of 1 IP per Instance, when 
using VT, unless, of course, the Instances are based on Docker, so, with it, I 
can easily see millions of tiny instances, each of them with its own IPv6 
address, without the overhead of a virtualized environment. So, with Docker, this 
"Floating IPv6 Range" doesn't seem to be useful...


* I know that there is NAT66 out there but, who is actually using it?! I'll 
never use this thing. Personally I dislike NAT very much, mostly because it 
breaks the end-to-end Internet connectivity, effectively kicking you out from 
the real Internet, and it is just a workaround created to deal with IPv4 
exhaustion.

As a long-time IPv6 engineer, I will advocate against NAT wherever I possibly 
can.  +1 to this!



BTW, please guys, let me know if this isn't the right place to post "ideas for 
OpenStack / feature requests"... I don't want to bloat this list with 
undesirable messages.

It absolutely is!  Also, you might want to join the discussions for the IPv6 
sub-team: https://wiki.openstack.org/wiki/Meetings/Neutron-IPv6-Subteam



Best Regards,
Thiago Martins

-Anthony Veiga
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Fox, Kevin M
Another scaling down/update use case:
Say I have a pool of ssh servers for users to use (compute cluster login nodes).
Autoscaling up is easy. Just launch a new node and add it to the load balancer.

Scaling down/updating is harder. It should ideally:
 * Set the admin state on the load balancer for the node, ensuring no new 
connections go to the node.
 * Contact the node or the balancer and ensure all outstanding connections are 
complete. Wait for it. This could be a long long time.
 * Destroy or update the node

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Tuesday, February 11, 2014 8:13 AM
To: openstack-dev
Subject: Re: [openstack-dev] [Heat] in-instance update hooks

Excerpts from Steven Dake's message of 2014-02-11 07:19:19 -0800:
> On 02/10/2014 10:22 PM, Clint Byrum wrote:
> > Hi, so in the previous thread about rolling updates it became clear that
> > having in-instance control over updates is a more fundamental idea than
> > I had previously believed. During an update, Heat does things to servers
> > that may interrupt the server's purpose, and that may cause it to fail
> > subsequent things in the graph.
> >
> > Specifically, in TripleO we have compute nodes that we are managing.
> > Before rebooting a machine, we want to have a chance to live-migrate
> > workloads if possible, or evacuate in the simpler case, before the node
> > is rebooted. Also in the case of a Galera DB where we may even be running
> > degraded, we want to ensure that we have quorum before proceeding.
> >
> > I've filed a blueprint for this functionality:
> >
> > https://blueprints.launchpad.net/heat/+spec/update-hooks
> >
> > I've cobbled together a spec here, and I would very much welcome
> > edits/comments/etc:
> >
> > https://etherpad.openstack.org/p/heat-update-hooks
> Clint,
>
> I read through your etherpad and think there is a relationship to a use
> case for scaling.  It would be sweet if both these cases could use the
> same model and tooling.  At the moment in an autoscaling group, when you
> want to scale "down" there is no way to quiesce the node before killing
> the VM.  It is the same problem you have with Galera, except your
> scenario involves an update.
>
> I'm not clear how the proposed design could be made to fit this
> particular use case.  Do you see a way it can fill both roles so we
> don't have two different ways to do essentially the same thing?
>

I see scaling down as an update to a nested stack which contains all of
the members of the scaling group.

So if we start focusing on scaling stacks, as Zane suggested in
the rolling updates thread, then the author of said stack would add
action_hooks to the resources that are scaled.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] www.xrefs.info: addition of openflow and SDN

2014-02-11 Thread John Smith
I setup http://www.xrefs.info for open source developers to do cross
reference search based on OpenGrok. It covers projects like linux
kernel from version 0.01 to latest, android, linux packages, BSD,
cloud computing, big data, SDN etc.. Check it out...Let me know if you
have any suggestions or comments...Thx. xrefs.info Admin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [group-policy] Changing the meeting time

2014-02-11 Thread Stephen Wong
Hi Kyle,

Almost missed this - sounds good to me.

Thanks,
- Stephen



On Mon, Feb 10, 2014 at 7:30 PM, Kyle Mestery wrote:

> Folks:
>
> I'd like to propose moving the Neutron Group Policy meeting going
> forward, starting with this Thursday. The meeting has been at 1600
> UTC on Thursdays, I'd like to move this to 1700UTC Thursdays
> on #openstack-meeting-alt. If this is a problem for anyone who
> regularly attends this meeting, please reply here. If I don't hear
> any replies by Wednesday, I'll officially move the meeting.
>
> Thanks!
> Kyle
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] Idea: Floating IPv6 - "Without any kind of NAT"

2014-02-11 Thread Martinx - ジェームズ
Hello Stackers!

It is very nice to watch the OpenStack evolution in IPv6! Great job guys!!


I have another idea:

"Floating IP" for IPv6, or just "Floating IPv6"


With IPv4, as we know, OpenStack has a feature called "Floating IP", which
is basically a 1-to-1 NAT rule (within the tenant's Namespace q-router). In
IPv4 networks, we need this "Floating IP" attached to an Instance, to be
able to reach it from the Internet (*I don't like it*). But, what is the
use case for a "Floating IP" when you have *no NAT** (as it is with IPv6)?!

At first, with IPv6, I was planning to disable the "Floating IP"
feature entirely, by removing it from the Dashboard and from the APIs (even for
IPv4, if FWaaS can somehow manage the q-router IPv4 NAT rules,
and not only the "iptables filter table") and then, I just had an idea!

For IPv6, the "Floating IP" can still be used to allocate more (and more)
IPs to an Instance BUT, instead of creating a NAT rule (like it is for
IPv4), it will configure DNSMasq (or something like it) to provide more
IPv6 addresses per MAC / Instance. That way, we can virtually
allocate unlimited IPs (v6) for each Instance!

It will be pretty cool to see the attached "Floating IPv6", literally
"floating around" the tenant subnet, appearing inside the Instances itself
(instead of inside the tenant's Namespace), so, we'll be able to see it
(the Floating IPv6) with "ip -6 address" command within the attached
Instance!

The only problem I see with this is that, for IPv4, the allocated
"Floating IPs"
come from the "External Network" (neutron / --allocation-pool) and, for IPv6,
they will come from the tenant's IPv6 subnet itself... I think... Right?!

---
Why I want tons of IPv6 within each Instance?

A.: Because we can! I mean, we can go back to the days when we had 1
website per 1 public IP (i.e. using IP-Based Virtual Hosts with Apache - I
prefer this approach).

Also, we can try to turn the "Floating IPv6" into some kind of "Floating
IPv6 Range"; this way, we can, for example, allocate millions of IPs per
Instance, like this in DHCPv6: "range6 2001:db8:1:1::1000
2001:db8:1:1000:1000;"...
---

NOTE: I prefer multiple IPs per Instance, instead of 1 IP per Instance,
when using VT, unless, of course, the Instances are based on Docker; with
Docker, I can easily see millions of tiny instances, each of them with its
own IPv6 address, without the overhead of a virtualized environment. So, with
Docker, this "Floating IPv6 Range" doesn't seem to be useful...


* I know that there is NAT66 out there, but who is actually using it?! I'll
never use this thing. Personally I dislike NAT very much, mostly because it
breaks end-to-end Internet connectivity, effectively kicking you out
of the real Internet, and it is just a workaround created to deal with
IPv4 exhaustion.


BTW, please guys, let me know if this isn't the right place to post "ideas
for OpenStack / feature requests"... I don't want to bloat this list with
undesirable messages.


Best Regards,
Thiago Martins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Dina Belova
>
> This is something to explore to add in Nova, using
> a local service or external service (need to explore Climate).


I need to find out more about Climate.


Here is Climate Launchpad: https://launchpad.net/climate

That's still a really young project, but I believe it'll have a great future
as far as resource reservation is concerned. So if you need some kind of
reservation logic, I also believe that should be implemented in Climate (as
it's proposed and currently implemented as Reservation-as-a-Service).

Thanks


On Tue, Feb 11, 2014 at 8:23 PM, Yathiraj Udupi (yudupi)
wrote:

>  Hi Dina,
>
>  Thanks for note about Climate logic.  This is something that will be
> very useful, when we will have to schedule from Nova multiple instances (of
> potentially different flavors) as a single request.  If the Solver
> Scheduler, can make a request to the Climate service to reserve the
> resources soon after the placement decision has been made, then the nova
> provisioning logic can handle the resource provisioning using the climate
> reserved leases.  Regarding Solver Scheduler for your reference, just sent
> another email about this with some pointers about it.  Otherwise this is
> the blueprint -
> https://blueprints.launchpad.net/nova/+spec/solver-scheduler
> I guess this is something to explore more and see how Nova provisioning
> logic to work with Climate leases. Or this is something that already works.
>  I need to find out more about Climate.
>
>  Thanks,
> Yathi.
>
>
>   On 2/11/14, 7:44 AM, "Dina Belova"  wrote:
>
>Like a restaurant reservation, it would "claim" the resources for use
>> by someone at a later date.  That way nobody else can use them.
>> That way the scheduler would be responsible for determining where the
>> resource should be allocated from, and getting a reservation for that
>> resource.  It would not have anything to do with actually instantiating the
>> instance/volume/etc.
>
>
>  Although I'm quite new to the topic of the Solver Scheduler, it seems to me that
> in that case you need to look at the Climate project. It aims to provide
> resource reservation to OpenStack clouds (and by resource I mean here
> instance/compute host/volume/etc.)
>
>  And the Climate logic is: create a lease, get resources from the common pool,
> and do something with them when the lease start time comes.
>
>  I'll say it one more time - I'm not really familiar with this discussion, but
> it looks like Climate might help here.
>
>  Thanks
> Dina
>
>
> On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen <
> chris.frie...@windriver.com> wrote:
>
>> On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
>>
>>>  Second, there is nothing wrong with booting the instances (or

>>> instantiating other
>>>
 resources) as separate commands as long as we support some kind of
 reservation token.

>>>
>>> I'm not sure what reservation token would do, is it some kind of
>>> informing
>>> the scheduler that the resources would not be initiated until later ?
>>>
>>
>>  Like a restaurant reservation, it would "claim" the resources for use by
>> someone at a later date.  That way nobody else can use them.
>>
>> That way the scheduler would be responsible for determining where the
>> resource should be allocated from, and getting a reservation for that
>> resource.  It would not have anything to do with actually instantiating the
>> instance/volume/etc.
>>
>>
>>  Let's consider a following example:
>>>
>>> A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
>>> with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
>>> left, and another with 30 GB RAM left, using Filter Scheduler's default
>>> RamWeigher.
>>>
>>> If we pass the demand as two commands, there is a chance that the small
>>> VM
>>> arrives first. RamWeigher will put it in the 50 GB RAM host, which will
>>> be
>>> reduced to 30 GB RAM. Then, when the big VM request arrives, there will
>>> be
>>> no space left to host it. As a result, the whole demand is failed.
>>>
>>> Now if we can pass the two VMs in a command, SolverScheduler can put
>>> their
>>> constraints all together into one big LP as follow (x_uv = 1 if VM u is
>>> hosted in host v, 0 if not):
>>>
>>
>>  Yes.  So what I'm suggesting is that we schedule the two VMs as one call
>> to the SolverScheduler.  The scheduler then gets reservations for the
>> necessary resources and returns them to the caller.  This would be sort of
>> like the existing Claim object in nova/compute/claims.py but generalized
>> somewhat to other resources as well.
>>
>> The caller could then boot each instance separately (passing the
>> appropriate reservation/claim along with the boot request).  Because the
>> caller has a reservation the core code would know it doesn't need to
>> schedule or allocate resources, that's already been done.
>>
>> The advantage of this is that the scheduling and resource allocation is
>> done separately from the instantiation.  The instantiation API could remain
>> basically as-is except for support

Re: [openstack-dev] [Nova][Glance]supporting of v1 and v2 glance APIs in Nova

2014-02-11 Thread Eddie Sheffield
> A few days ago, I met some problems when using the 'createImage' feature in
> Nova. We found that using v1 of glanceclient has some problems with the
> processing of metadata, and that the version number and even the Glance URIs
> are both hardcoded in Nova.
> 
> Then we found the blueprint[1] proposed, and the mailing list thread[2] which
> talked about the topic before, mainly focused on version autodiscovery via the
> Keystone catalog and a config option for Nova. But we still need changes in
> Nova because of the incompatible behavior between v1 and v2, especially when
> creating and uploading an image file. The review request[3] for the bp is
> abandoned for now.
> 
> So, what I want to confirm is, how could this situation be handled? I
> mailed Eddie Sheffield, but got no answer, so bring it up here.
> 
> [1]: https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
> [2]: http://markmail.org/message/uqrpufsmh4qp5pgy
> [3]: https://review.openstack.org/#/c/38414/

Hi Lingxian,

I'm afraid I somehow didn't see the email you sent to me directly. We recently 
held the Glance Mini-summit and this work was discussed. Andrew Laski from the 
Nova team was also in attendance and provided some input from their 
perspective. Essentially we decided that while autodiscovery is desirable, we 
want to roll that functionality into a much-improved python-glanceclient which 
will present a version-agnostic programming API to users of the library. So the 
immediate plan is to go back to the approach outlined in the bp and the merge 
prop you reference above, then in the near future produce the new glanceclient, 
followed by updating Nova to use the new library, which will address the 
concerns of autodiscovery among other things.
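
To illustrate the kind of difference the version-agnostic client has to paper
over, here is a rough sketch (not the actual patch series, and the exact call
details may vary between client releases) of image creation/upload against v1
vs. v2:

  # Sketch only: why a version-agnostic wrapper helps -- upload differs
  # between Glance v1 and v2.
  import glanceclient

  def upload_image(version, endpoint, token, name, image_file):
      # `version` would ultimately come from config or version discovery;
      # here it is simply passed in.
      glance = glanceclient.Client(version, endpoint, token=token)
      if version == '1':
          # v1 takes the image data in the same call that creates the record
          return glance.images.create(name=name,
                                      disk_format='qcow2',
                                      container_format='bare',
                                      data=image_file)
      # v2 creates the record first, then uploads the bits separately
      image = glance.images.create(name=name,
                                   disk_format='qcow2',
                                   container_format='bare')
      glance.images.upload(image.id, image_file)
      return glance.images.get(image.id)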

Timeline-wise, I've been a bit covered up with some other work but will be 
getting back to this within a week. There were some concerns about the size of 
the patch so rather than unabandoning the existing one I will be trying to put 
up multiple, smaller patches.

Please let me know if you have any specific concerns or requirements so they 
can be addressed.

Eddie Sheffield
Rackspace Hosting, Inc.
eddie.sheffi...@rackspace.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral]

2014-02-11 Thread Dmitri Zimine
Yes, it makes sense; let's draft how it may look.

We also need to think over the implementation implications - right now we 
separate task parameters, action parameters, and service parameters, and we may 
need to merge them when instantiating the action. 

DZ. 

On Feb 11, 2014, at 6:19 AM, Renat Akhmerov  wrote:

> Dmitry, I think you are right here. I think for simple case we should be able 
> to use in-place action definition without having to define the action 
> separately. Like you said it’s only valuable if we need to reuse it.
> 
> The only difference I see between std:send-email and something like REST_API 
> is that a set of parameters for the latter is dynamic (versus std:send-email 
> where it’s always “recipients”, “subject”, “body”). Even though it’s still 
> the same protocol (HTTP) but a particular request representation may be 
> different (i.e. query string, headers, the structure of body in case POST 
> etc.). But I think that doesn’t cancel the idea of being able to define the 
> action along with the task itself.
> 
> So good point. As for the syntax itself, we need to think it over. In the 
> snippet you provided “action: std:REST_API”, so we need to make sure not to 
> have ambiguities in the ways how we can refer actions. A convention could be 
> “if we don’t use a namespace we assume that there’s a separate action 
> definition included into the same workbook, otherwise it should be considered 
> in-place action definition and task property “action” refers to an action 
> type rather than the action itself”. Does that make sense?

> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> On 11 Feb 2014, at 16:23, Dmitri Zimine  wrote:
> 
>> Do we have (or have we thought about) a shorthand for calling a REST_API action, without 
>> defining a service? 
>> 
>> FULL  DSL:
>> 
>> Services:
>> TimeService:
>>  type: REST_API
>>  parameters:
>>baseUrl:http://api.timezonedb.com
>>key:
>>  actions:
>>get-time:
>>  task-parameters:
>>zone:
>> Workflow:
>>  tasks:
>> timeInToronto:
>>action: TimeService:get-time
>>parameters:
>>  zone: "America/Toronto"
>> 
>> SHORTCUT - may look something like this: 
>> 
>> Workflow:
>>  tasks:
>>  timeInToronto:
>>  action:std:REST_API
>>  parameters:
>>baseUrl: "http://api.timezonedb.com";
>>method: "GET"
>>parameters: "zone=/America/Toronto&key="
>>
>> Why asking:  
>> 
>> 1) Analogy with the std:send-email action. I wonder, do we have to make the user 
>> define a Service for std:send-email? I think that for standard tasks we 
>> shouldn't have to. If there is any thinking on REST_API, it may apply here. 
>> 
>> 2) For one-off web service calls the complete syntax may be overkill 
>> (but yes, it comes in handy for reuse). See examples below. 
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Sylvain Bauza

On 11/02/2014 17:23, Yathiraj Udupi (yudupi) wrote:

Hi Dina,

Thanks for the note about the Climate logic.  This is something that will be 
very useful when we have to schedule multiple instances (of potentially 
different flavors) from Nova as a single request.  If the Solver Scheduler 
can make a request to the Climate service to reserve the resources soon 
after the placement decision has been made, then the Nova provisioning 
logic can handle the resource provisioning using the Climate reserved 
leases.  Regarding the Solver Scheduler, for your reference I just sent 
another email about this with some pointers about it.  Otherwise this is 
the blueprint - 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler
I guess this is something to explore more, to see how the Nova 
provisioning logic can work with Climate leases, or whether this is 
something that already works.  I need to find out more about Climate.


Thanks,
Yathi.




There are possibly 2 ways of creating a lease: either through the CLI or 
via the Python binding.


We implemented these 2 possibilities within the current Climate 0.1 
release:
 - a Nova extension plugin is responsible for creating the lease if a 
VM should be reserved (using the Climate python client binding)
 - a user can request the reservation of a compute host using the Climate 
python client directly


Both flows (VM and compute host) actually refer to 2 distinct 
plugins in the Climate manager, so the actions are completely different.


Based on your use-case, you could imagine a call from the 
SolverScheduler to Climate for creating a lease containing multiple VM 
reservations, and either you would use the Climate VM plugin or you 
would use a dedicated plugin if your need is different.


I don't think that's a huge volume of work, as Climate already defines 
and implements the main features that you need.
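
To give a rough idea of the shape of that interaction, a lease-creation call
from the SolverScheduler side could look something like the sketch below.
Please treat the climateclient import path, call names and reservation fields
as assumptions for illustration, not the actual client API:

  # Hypothetical sketch: ask Climate for a lease covering the VMs that the
  # SolverScheduler has just placed. Field names are illustrative only.
  from datetime import datetime, timedelta

  from climateclient import client as climate_client  # assumed import path

  def reserve_placement(auth_token, climate_url, placement):
      # placement: list of dicts like {'flavor': 'm1.small', 'host': 'node-1'}
      climate = climate_client.Client(climate_url=climate_url,
                                      auth_token=auth_token)
      start = datetime.utcnow()
      reservations = [{'resource_type': 'virtual:instance',   # assumed value
                       'flavor': vm['flavor'],
                       'affinity_host': vm['host']}           # assumed field
                      for vm in placement]
      return climate.lease.create(
          name='solver-scheduler-lease',
          start=start.strftime('%Y-%m-%d %H:%M'),
          end=(start + timedelta(hours=1)).strftime('%Y-%m-%d %H:%M'),
          reservations=reservations,
          events=[])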


-Sylvain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][TripleO] NIC bonding for OpenStack

2014-02-11 Thread Andrey Danin
Hi Openstackers,

We are working on link aggregation support in Fuel. We wonder which types of
bonding are the most desirable in datacenters right now. We had some issues (see
below) with an OVS bond in LACP mode, and it turned out that standard Linux
bonding (attached to OVS bridges) was a better option in our setup.

I want to hear your opinion, guys. Which types of bonding do you think are
better right now in terms of stability and performance, so that we can properly
support them for OpenStack installations?

Also, we are wondering if there are any plans to support bonding in TripleO,
and how you guys would like to see it implemented. What is the general
approach for such complex network configurations in TripleO? We would love
to extract this piece from Fuel and make it fully independent, so that the
larger community can use it and we can work collaboratively on it. Right
now it is actually already granular, can be reused in other projects,
and is implemented as a separate puppet module:
https://github.com/stackforge/fuel-library/tree/master/deployment/puppet/l23network
.

Some links with our design considerations:

https://etherpad.openstack.org/p/fuel-bonding-design

https://blueprints.launchpad.net/fuel/+spec/nics-bonding-enabled-from-ui



UI mockups:

https://drive.google.com/file/d/0Bw6txZ1qvn9CaDdJS0ZUcW1DeDg/edit?usp=sharing

Description of the problem with LACP we ran into:
https://etherpad.openstack.org/p/LACP_issue

Thanks,


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Yathiraj Udupi (yudupi)
Hi Dina,

Thanks for the note about the Climate logic.  This is something that will be very 
useful when we have to schedule multiple instances (of potentially different 
flavors) from Nova as a single request.  If the Solver Scheduler can make a 
request to the Climate service to reserve the resources soon after the 
placement decision has been made, then the Nova provisioning logic can 
handle the resource provisioning using the Climate reserved leases.  Regarding 
the Solver Scheduler, for your reference I just sent another email about this with 
some pointers about it.  Otherwise this is the blueprint - 
https://blueprints.launchpad.net/nova/+spec/solver-scheduler
I guess this is something to explore more, to see how the Nova provisioning logic 
can work with Climate leases, or whether this is something that already works.  I need 
to find out more about Climate.

Thanks,
Yathi.


On 2/11/14, 7:44 AM, "Dina Belova" 
mailto:dbel...@mirantis.com>> wrote:

Like a restaurant reservation, it would "claim" the resources for use by 
someone at a later date.  That way nobody else can use them.
That way the scheduler would be responsible for determining where the resource 
should be allocated from, and getting a reservation for that resource.  It 
would not have anything to do with actually instantiating the 
instance/volume/etc.

Although I'm quite new to the topic of the Solver Scheduler, it seems to me that in 
that case you need to look at the Climate project. It aims to provide resource 
reservation to OpenStack clouds (and by resource I mean here instance/compute 
host/volume/etc.)

And the Climate logic is: create a lease, get resources from the common pool, and do 
something with them when the lease start time comes.

I'll say it one more time - I'm not really familiar with this discussion, but it 
looks like Climate might help here.

Thanks
Dina


On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen 
mailto:chris.frie...@windriver.com>> wrote:
On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
Second, there is nothing wrong with booting the instances (or
instantiating other
resources) as separate commands as long as we support some kind of
reservation token.

I'm not sure what reservation token would do, is it some kind of informing
the scheduler that the resources would not be initiated until later ?

Like a restaurant reservation, it would "claim" the resources for use by 
someone at a later date.  That way nobody else can use them.

That way the scheduler would be responsible for determining where the resource 
should be allocated from, and getting a reservation for that resource.  It 
would not have anything to do with actually instantiating the 
instance/volume/etc.


Let's consider a following example:

A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
left, and another with 30 GB RAM left, using Filter Scheduler's default
RamWeigher.

If we pass the demand as two commands, there is a chance that the small VM
arrives first. RamWeigher will put it in the 50 GB RAM host, which will be
reduced to 30 GB RAM. Then, when the big VM request arrives, there will be
no space left to host it. As a result, the whole demand is failed.

Now if we can pass the two VMs in a command, SolverScheduler can put their
constraints all together into one big LP as follow (x_uv = 1 if VM u is
hosted in host v, 0 if not):

Yes.  So what I'm suggesting is that we schedule the two VMs as one call to the 
SolverScheduler.  The scheduler then gets reservations for the necessary 
resources and returns them to the caller.  This would be sort of like the 
existing Claim object in nova/compute/claims.py but generalized somewhat to 
other resources as well.

The caller could then boot each instance separately (passing the appropriate 
reservation/claim along with the boot request).  Because the caller has a 
reservation the core code would know it doesn't need to schedule or allocate 
resources, that's already been done.

The advantage of this is that the scheduling and resource allocation is done 
separately from the instantiation.  The instantiation API could remain 
basically as-is except for supporting an optional reservation token.


That responses to your first point, too. If we don't mind that some VMs
are placed and some are not (e.g. they belong to different apps), then
it's OK to pass them to the scheduler without Instance Group. However, if
the VMs are together (belong to an app), then we have to put them into an
Instance Group.

When I think of an "Instance Group", I think of 
"https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension";.   
Fundamentally Instance Groups" describes a runtime relationship between 
different instances.

The scheduler doesn't necessarily care about a runtime relationship, it's just 
trying to allocate resources efficiently.

In the above example, there is no need for those two instances to necessarily 
be part of an Instance Group--we just want to 

Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-02-11 07:19:19 -0800:
> On 02/10/2014 10:22 PM, Clint Byrum wrote:
> > Hi, so in the previous thread about rolling updates it became clear that
> > having in-instance control over updates is a more fundamental idea than
> > I had previously believed. During an update, Heat does things to servers
> > that may interrupt the server's purpose, and that may cause it to fail
> > subsequent things in the graph.
> >
> > Specifically, in TripleO we have compute nodes that we are managing.
> > Before rebooting a machine, we want to have a chance to live-migrate
> > workloads if possible, or evacuate in the simpler case, before the node
> > is rebooted. Also in the case of a Galera DB where we may even be running
> > degraded, we want to ensure that we have quorum before proceeding.
> >
> > I've filed a blueprint for this functionality:
> >
> > https://blueprints.launchpad.net/heat/+spec/update-hooks
> >
> > I've cobbled together a spec here, and I would very much welcome
> > edits/comments/etc:
> >
> > https://etherpad.openstack.org/p/heat-update-hooks
> Clint,
> 
> I read through your etherpad and think there is a relationship to a use 
> case for scaling.  It would be sweet if both these cases could use the 
> same model and tooling.  At the moment in an autoscaling group, when you 
> want to scale "down" there is no way to quiesce the node before killing 
> the VM.  It is the same problem you have with Galera, except your 
> scenario involves an update.
> 
> I'm not clear how the proposed design could be made to fit this 
> particular use case.  Do you see a way it can fill both roles so we 
> don't have two different ways to do essentially the same thing?
> 

I see scaling down as an update to a nested stack which contains all of
the members of the scaling group.

So if we start focusing on scaling stacks, as Zane suggested in
the rolling updates thread, then the author of said stack would add
action_hooks to the resources that are scaled.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Yathiraj Udupi (yudupi)
I thought of adding some more points about the Solver Scheduler to this
conversation.
Think of SolverScheduler as a placement decision engine, which gives an
optimal solution for the specified request based on the current
information available at a specific time.  The request could potentially
be a set of instances of the same flavor or different flavors (assuming we
eventually support scheduler APIs that can provide this).
Once the optimal placement decision is known, then we need to allocate the
resources. Currently Nova supports the final allocation of resources
(resource provisioning) one at a time.  I definitely agree there will be
more success in allocating all the requested instances if there is
support for reserving the spots as soon as a placement decision is taken
by the SolverScheduler. This is something to explore adding to Nova, using
a local service or an external service (we need to explore Climate).

If the required set of instances is known, irrespective of whether it is
part of one instance group or multiple instance groups, you always get a
more optimal solution if all of it is requested in one shot from the
solver scheduler, as the constraint solving happens in one pass, letting
you know whether the entire set of instances can feasibly be placed given the
existing resource capacity.
If it is okay to support partial instantiation of a subset of instances,
then it makes sense to provide support for retrying one instance group at a
time when the entire request is not feasible.
To add another point about the instance group API implementation for Icehouse,
it was decided at the Hong Kong summit to initially support only flat
instance groups without nesting. Hence if an application requires a big
topology of instances, they can easily belong to multiple instance groups,
and if you want the entire application requirement to be satisfied,
the entire set of instances from multiple flat instance groups should be
requested in a single shot to the solver scheduler.  Also, there is
additional work required to add new scheduler APIs to support requesting
instance groups of multiple flavors.

I think I have reiterated some of the points that Chris has mentioned
below.  But yes, as I stated earlier in this thread, we need to
separate the decision-making phase from the initial request making and from
the final allocation/provisioning (or orchestration).  Between these phases, a
reservation phase after the decision making will add additional
guarantees that the placed instances can be allocated.

Thanks,
Yathi. 



On 2/11/14, 7:09 AM, "Chris Friesen"  wrote:

>On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
>>> Second, there is nothing wrong with booting the instances (or
>> instantiating other
>>> resources) as separate commands as long as we support some kind of
>>> reservation token.
>>
>> I'm not sure what reservation token would do, is it some kind of
>>informing
>> the scheduler that the resources would not be initiated until later ?
>
>Like a restaurant reservation, it would "claim" the resources for use by
>someone at a later date.  That way nobody else can use them.
>
>That way the scheduler would be responsible for determining where the
>resource should be allocated from, and getting a reservation for that
>resource.  It would not have anything to do with actually instantiating
>the instance/volume/etc.
>
>> Let's consider a following example:
>>
>> A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
>> with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
>> left, and another with 30 GB RAM left, using Filter Scheduler's default
>> RamWeigher.
>>
>> If we pass the demand as two commands, there is a chance that the small
>>VM
>> arrives first. RamWeigher will put it in the 50 GB RAM host, which will
>>be
>> reduced to 30 GB RAM. Then, when the big VM request arrives, there will
>>be
>> no space left to host it. As a result, the whole demand is failed.
>>
>> Now if we can pass the two VMs in a command, SolverScheduler can put
>>their
>> constraints all together into one big LP as follow (x_uv = 1 if VM u is
>> hosted in host v, 0 if not):
>
>Yes.  So what I'm suggesting is that we schedule the two VMs as one call
>to the SolverScheduler.  The scheduler then gets reservations for the
>necessary resources and returns them to the caller.  This would be sort
>of like the existing Claim object in nova/compute/claims.py but
>generalized somewhat to other resources as well.
>
>The caller could then boot each instance separately (passing the
>appropriate reservation/claim along with the boot request).  Because the
>caller has a reservation the core code would know it doesn't need to
>schedule or allocate resources, that's already been done.
>
>The advantage of this is that the scheduling and resource allocation is
>done separately from the instantiation.  The instantiation API could
>remain basically as-is except for supporting an optional reservation
>token.
>
>> That responses to y

Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Khanh-Toan Tran
Thanks, I will look closely at it.



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Tuesday, February 11, 2014 16:45
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
Solver Scheduler



Like a restaurant reservation, it would "claim" the resources for use by
someone at a later date.  That way nobody else can use them.
That way the scheduler would be responsible for determining where the
resource should be allocated from, and getting a reservation for that
resource.  It would not have anything to do with actually instantiating
the instance/volume/etc.



Although I'm quite new to the topic of the Solver Scheduler, it seems to me that
in that case you need to look at the Climate project. It aims to provide
resource reservation to OpenStack clouds (and by resource I mean here
instance/compute host/volume/etc.)



And the Climate logic is: create a lease, get resources from the common pool,
and do something with them when the lease start time comes.



I'll say it one more time - I'm not really familiar with this discussion, but
it looks like Climate might help here.



Thanks

Dina



On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen
 wrote:

On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:

Second, there is nothing wrong with booting the instances (or

instantiating other

resources) as separate commands as long as we support some kind of
reservation token.


I'm not sure what reservation token would do, is it some kind of informing
the scheduler that the resources would not be initiated until later ?



Like a restaurant reservation, it would "claim" the resources for use by
someone at a later date.  That way nobody else can use them.

That way the scheduler would be responsible for determining where the
resource should be allocated from, and getting a reservation for that
resource.  It would not have anything to do with actually instantiating
the instance/volume/etc.



Let's consider a following example:

A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
left, and another with 30 GB RAM left, using Filter Scheduler's default
RamWeigher.

If we pass the demand as two commands, there is a chance that the small VM
arrives first. RamWeigher will put it in the 50 GB RAM host, which will be
reduced to 30 GB RAM. Then, when the big VM request arrives, there will be
no space left to host it. As a result, the whole demand is failed.

Now if we can pass the two VMs in a command, SolverScheduler can put their
constraints all together into one big LP as follow (x_uv = 1 if VM u is
hosted in host v, 0 if not):



Yes.  So what I'm suggesting is that we schedule the two VMs as one call
to the SolverScheduler.  The scheduler then gets reservations for the
necessary resources and returns them to the caller.  This would be sort of
like the existing Claim object in nova/compute/claims.py but generalized
somewhat to other resources as well.

The caller could then boot each instance separately (passing the
appropriate reservation/claim along with the boot request).  Because the
caller has a reservation the core code would know it doesn't need to
schedule or allocate resources, that's already been done.

The advantage of this is that the scheduling and resource allocation is
done separately from the instantiation.  The instantiation API could
remain basically as-is except for supporting an optional reservation
token.



That responses to your first point, too. If we don't mind that some VMs
are placed and some are not (e.g. they belong to different apps), then
it's OK to pass them to the scheduler without Instance Group. However, if
the VMs are together (belong to an app), then we have to put them into an
Instance Group.



When I think of an "Instance Group", I think of
"https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension";
.   Fundamentally Instance Groups" describes a runtime relationship
between different instances.

The scheduler doesn't necessarily care about a runtime relationship, it's
just trying to allocate resources efficiently.

In the above example, there is no need for those two instances to
necessarily be part of an Instance Group--we just want to schedule them
both at the same time to give the scheduler a better chance of fitting
them both.

More generally, the more instances I want to start up the more beneficial
it can be to pass them all to the scheduler at once in order to give the
scheduler more information.  Those instances could be parts of completely
independent Instance Groups, or not part of an Instance Group at all...the
scheduler can still do a better job if it has more information to work
with.



Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--

Best regards,

Dina Belova

Software Engineer

[openstack-dev] Hyper-v meeting cancelled.

2014-02-11 Thread Peter Pouliot
Hi all,
We are currently working to bring additional compute power to the Hyper-V CI 
infrastructure.
Because this task currently takes precedence, I need to cancel today's 
meeting.

P

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Dina Belova
>
> Like a restaurant reservation, it would "claim" the resources for use by
> someone at a later date.  That way nobody else can use them.
> That way the scheduler would be responsible for determining where the
> resource should be allocated from, and getting a reservation for that
> resource.  It would not have anything to do with actually instantiating the
> instance/volume/etc.


Although I'm quite new to the topic of the Solver Scheduler, it seems to me that in
that case you need to look at the Climate project. It aims to provide resource
reservation to OpenStack clouds (and by resource I mean here instance/compute
host/volume/etc.)

And the Climate logic is: create a lease, get resources from the common pool, and
do something with them when the lease start time comes.

I'll say it one more time - I'm not really familiar with this discussion, but it
looks like Climate might help here.

Thanks
Dina


On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen
wrote:

> On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
>
>> Second, there is nothing wrong with booting the instances (or
>>>
>> instantiating other
>>
>>> resources) as separate commands as long as we support some kind of
>>> reservation token.
>>>
>>
>> I'm not sure what reservation token would do, is it some kind of informing
>> the scheduler that the resources would not be initiated until later ?
>>
>
> Like a restaurant reservation, it would "claim" the resources for use by
> someone at a later date.  That way nobody else can use them.
>
> That way the scheduler would be responsible for determining where the
> resource should be allocated from, and getting a reservation for that
> resource.  It would not have anything to do with actually instantiating the
> instance/volume/etc.
>
>
>  Let's consider a following example:
>>
>> A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
>> with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
>> left, and another with 30 GB RAM left, using Filter Scheduler's default
>> RamWeigher.
>>
>> If we pass the demand as two commands, there is a chance that the small VM
>> arrives first. RamWeigher will put it in the 50 GB RAM host, which will be
>> reduced to 30 GB RAM. Then, when the big VM request arrives, there will be
>> no space left to host it. As a result, the whole demand is failed.
>>
>> Now if we can pass the two VMs in a command, SolverScheduler can put their
>> constraints all together into one big LP as follow (x_uv = 1 if VM u is
>> hosted in host v, 0 if not):
>>
>
> Yes.  So what I'm suggesting is that we schedule the two VMs as one call
> to the SolverScheduler.  The scheduler then gets reservations for the
> necessary resources and returns them to the caller.  This would be sort of
> like the existing Claim object in nova/compute/claims.py but generalized
> somewhat to other resources as well.
>
> The caller could then boot each instance separately (passing the
> appropriate reservation/claim along with the boot request).  Because the
> caller has a reservation the core code would know it doesn't need to
> schedule or allocate resources, that's already been done.
>
> The advantage of this is that the scheduling and resource allocation is
> done separately from the instantiation.  The instantiation API could remain
> basically as-is except for supporting an optional reservation token.
>
>
>  That responses to your first point, too. If we don't mind that some VMs
>> are placed and some are not (e.g. they belong to different apps), then
>> it's OK to pass them to the scheduler without Instance Group. However, if
>> the VMs are together (belong to an app), then we have to put them into an
>> Instance Group.
>>
>
> When I think of an "Instance Group", I think of "
> https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension";.
>   Fundamentally Instance Groups" describes a runtime relationship between
> different instances.
>
> The scheduler doesn't necessarily care about a runtime relationship, it's
> just trying to allocate resources efficiently.
>
> In the above example, there is no need for those two instances to
> necessarily be part of an Instance Group--we just want to schedule them
> both at the same time to give the scheduler a better chance of fitting them
> both.
>
> More generally, the more instances I want to start up the more beneficial
> it can be to pass them all to the scheduler at once in order to give the
> scheduler more information.  Those instances could be parts of completely
> independent Instance Groups, or not part of an Instance Group at all...the
> scheduler can still do a better job if it has more information to work with.
>
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenS

[openstack-dev] [Murano] Community meeting agenda - 02/11/2014

2014-02-11 Thread Alexander Tivelkov
Hi,

This is just a reminder that we are going to have a weekly meeting of
Murano team in IRC (#openstack-meeting-alt) today at 17:00 UTC (9am PST) .

The agenda can be found here:
https://wiki.openstack.org/wiki/Meetings/MuranoAgenda#Agenda

Feel free to add anything you want to discuss.

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Sylvain Bauza

On 11/02/2014 16:09, Chris Friesen wrote:


Yes.  So what I'm suggesting is that we schedule the two VMs as one 
call to the SolverScheduler.  The scheduler then gets reservations for 
the necessary resources and returns them to the caller.  This would be 
sort of like the existing Claim object in nova/compute/claims.py but 
generalized somewhat to other resources as well.


The caller could then boot each instance separately (passing the 
appropriate reservation/claim along with the boot request). Because 
the caller has a reservation the core code would know it doesn't need 
to schedule or allocate resources, that's already been done.


The advantage of this is that the scheduling and resource allocation 
is done separately from the instantiation.  The instantiation API 
could remain basically as-is except for supporting an optional 
reservation token.


I think you really need to look at what Climate is, but I don't want to 
be boring...


Climate can provide you with a way of doing the reservation, and possibly 
you would only need to write a plugin for such a request.


-Sylvain


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Sumit Naiksatam
+1

Sumit.
On Feb 10, 2014 3:33 PM, "Mark McClain"  wrote:

> All-
>
> I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.
>  Oleg has been valuable contributor to Neutron by actively reviewing,
> working on bugs, and contributing code.
>
> Neutron cores please reply back with +1/0/-1 votes.
>
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Steven Dake

On 02/10/2014 10:22 PM, Clint Byrum wrote:

Hi, so in the previous thread about rolling updates it became clear that
having in-instance control over updates is a more fundamental idea than
I had previously believed. During an update, Heat does things to servers
that may interrupt the server's purpose, and that may cause it to fail
subsequent things in the graph.

Specifically, in TripleO we have compute nodes that we are managing.
Before rebooting a machine, we want to have a chance to live-migrate
workloads if possible, or evacuate in the simpler case, before the node
is rebooted. Also in the case of a Galera DB where we may even be running
degraded, we want to ensure that we have quorum before proceeding.

I've filed a blueprint for this functionality:

https://blueprints.launchpad.net/heat/+spec/update-hooks

I've cobbled together a spec here, and I would very much welcome
edits/comments/etc:

https://etherpad.openstack.org/p/heat-update-hooks

Clint,

I read through your etherpad and think there is a relationship to a use 
case for scaling.  It would be sweet if both these cases could use the 
same model and tooling.  At the moment in an autoscaling group, when you 
want to scale "down" there is no way to quiesce the node before killing 
the VM.  It is the same problem you have with Galera, except your 
scenario involves an update.


I'm not clear how the proposed design could be made to fit this 
particular use case.  Do you see a way it can fill both roles so we 
don't have two different ways to do essentially the same thing?


Regards
-steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?

2014-02-11 Thread Anita Kuno
On 02/11/2014 04:57 AM, Alan Pevec wrote:
> Hi Mark and Anita,
> 
> could we declare stable/havana neutron gate jobs good enough at this point?
> There are still random failures as this no-op change shows
> https://review.openstack.org/72576
> but I don't think they're stable/havana specific.
> 
>>> Do we have a list of those somewhere?
>> Pulled out where following Neutron patches (IMHO all innocent for gate
>> breaking):
>> https://review.openstack.org/62206
>> https://review.openstack.org/67214
>> https://review.openstack.org/70232
> 
> I've resubmitted those without "Removed from the gate..." lines in the
> commit message, waiting for rechecks now.
> 
>>> I'm particularly interested in https://review.openstack.org/#/c/66149/ as a 
>>> fix for
>>> https://bugs.launchpad.net/keystone/+bug/1251123
> 
> This one is last remaining exception request for 2013.2.2 and is
> waiting for the master change to be reviewed:
> https://review.openstack.org/#/q/Ida39b4699ed6c568609a5121573fc3be5c4ab2f4,n,z
> I hope keystone core could review this one quickly so that backport
> can be updated and merged.
> 
> Thanks!
> Alan
> 
I will reaffirm here what I had stated in IRC.

If Mark McClain gives his assent for stable/havana patches to be
approved, I will not remove Neutron stable/havana patches from the gate
queue before they start running tests. If after they start running
tests, they demonstrate that they are failing, I will remove them from
the gate as a means to keep the gate flowing. If the stable/havana gate
jobs are indeed stable, I will not be removing any patches that should
be merged.

Adding commit lines is the fastest way to submit a new patchset without
affecting the code in the patch (hence removing it from the gate queue),
so thank you for removing those additional lines in the commit message.

Thank you,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Chris Friesen

On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:

Second, there is nothing wrong with booting the instances (or

instantiating other

resources) as separate commands as long as we support some kind of
reservation token.


I'm not sure what reservation token would do, is it some kind of informing
the scheduler that the resources would not be initiated until later ?


Like a restaurant reservation, it would "claim" the resources for use by 
someone at a later date.  That way nobody else can use them.


That way the scheduler would be responsible for determining where the 
resource should be allocated from, and getting a reservation for that 
resource.  It would not have anything to do with actually instantiating 
the instance/volume/etc.



Let's consider a following example:

A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one
with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM
left, and another with 30 GB RAM left, using Filter Scheduler's default
RamWeigher.

If we pass the demand as two commands, there is a chance that the small VM
arrives first. RamWeigher will put it in the 50 GB RAM host, which will be
reduced to 30 GB RAM. Then, when the big VM request arrives, there will be
no space left to host it. As a result, the whole demand is failed.

Now if we can pass the two VMs in a command, SolverScheduler can put their
constraints all together into one big LP as follow (x_uv = 1 if VM u is
hosted in host v, 0 if not):
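
(To make that concrete, a toy version of that placement LP can be written with
a generic MILP solver. The sketch below uses PuLP purely for illustration; it
is not what the SolverScheduler actually does internally.)

  # Toy model of the 2-VM / 2-host example above.
  import pulp

  vms = {'small': 20, 'big': 40}       # requested RAM in GB
  hosts = {'host1': 50, 'host2': 30}   # free RAM in GB

  prob = pulp.LpProblem("vm_placement", pulp.LpMaximize)

  # x[u][v] == 1 if VM u is hosted on host v, 0 otherwise
  x = pulp.LpVariable.dicts("x", (vms, hosts), cat="Binary")

  # Objective: place as many of the requested VMs as possible
  prob += pulp.lpSum(x[u][v] for u in vms for v in hosts)

  # Each VM lands on at most one host
  for u in vms:
      prob += pulp.lpSum(x[u][v] for v in hosts) <= 1

  # Host RAM capacity constraints
  for v in hosts:
      prob += pulp.lpSum(vms[u] * x[u][v] for u in vms) <= hosts[v]

  prob.solve()
  for u in vms:
      for v in hosts:
          if pulp.value(x[u][v]) == 1:
              print("%s -> %s" % (u, v))

Solving both VMs together yields big -> host1 and small -> host2, so both fit,
whereas greedy one-at-a-time placement can strand the big VM.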


Yes.  So what I'm suggesting is that we schedule the two VMs as one call 
to the SolverScheduler.  The scheduler then gets reservations for the 
necessary resources and returns them to the caller.  This would be sort 
of like the existing Claim object in nova/compute/claims.py but 
generalized somewhat to other resources as well.


The caller could then boot each instance separately (passing the 
appropriate reservation/claim along with the boot request).  Because the 
caller has a reservation the core code would know it doesn't need to 
schedule or allocate resources, that's already been done.


The advantage of this is that the scheduling and resource allocation is 
done separately from the instantiation.  The instantiation API could 
remain basically as-is except for supporting an optional reservation token.
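
In pseudo-code, the flow I have in mind is something like the sketch below.
None of these names are real APIs today; it's just to show where the
claim/reservation would travel:

  # Hypothetical flow: schedule everything at once, get claims back, then
  # boot each instance separately while handing its claim along.
  claims = scheduler.schedule_and_claim(request_specs=[small_vm, big_vm])

  for spec, claim in zip([small_vm, big_vm], claims):
      # Because a claim is attached, the boot path skips scheduling and
      # resource allocation -- that work has already been done.
      compute_api.boot(spec, reservation_token=claim.token)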



That responds to your first point, too. If we don't mind that some VMs
are placed and some are not (e.g. they belong to different apps), then
it's OK to pass them to the scheduler without Instance Group. However, if
the VMs are together (belong to an app), then we have to put them into an
Instance Group.


When I think of an "Instance Group", I think of 
"https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension";. 
  Fundamentally Instance Groups" describes a runtime relationship 
between different instances.


The scheduler doesn't necessarily care about a runtime relationship, 
it's just trying to allocate resources efficiently.


In the above example, there is no need for those two instances to 
necessarily be part of an Instance Group--we just want to schedule them 
both at the same time to give the scheduler a better chance of fitting 
them both.


More generally, the more instances I want to start up the more 
beneficial it can be to pass them all to the scheduler at once in order 
to give the scheduler more information.  Those instances could be parts 
of completely independent Instance Groups, or not part of an Instance 
Group at all...the scheduler can still do a better job if it has more 
information to work with.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Serg Melikyan
Doug, thank you for your response!

With a move to *oslo.messaging* I think we need to carefully
re-evaluate our communication patterns and try to either fit them into
RPC/Notifications, or work on extending *oslo.messaging* with some new ideas.
Adopting a new library is always a challenge.

>The RPC modules expect a response and provide timeout behavior, but the
notifications don't require that. Perhaps you could send those messages as
notifications?
Yes, notifications can be used as a replacement for sending messages, but I
think this may be an even more broken design than trying to replace messaging
with RPC without an actual redesign of our communications.

>The library is meant to be reusable, so if the API does not support your
use case please work with us to extend it rather than hacking around it or
forking it
Sure, I have no intention of hacking/changing/extending/forking anything related to
oslo.messaging without consulting the Oslo team first. I understand that
making the tools used by OpenStack better is very important to all of us.

When I talked about 'hacking' I had in mind a quick way to move to
*oslo.messaging* without losing any existing functionality. Since
oslo.messaging is meant to hide away the underlying implementation and
message-queue-related things, I am not sure that adding something as
specific as a queue TTL for selected queues is possible without extending
some classes from the driver layer.
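
For reference, the basic RPC shape we would have to map our murano-api ->
murano-conductor exchange onto looks roughly like the sketch below (the topic
and method names are made up for illustration; the oslo.messaging calls are the
standard ones, but the details still need to be confirmed against our code):

  # Sketch of the plain oslo.messaging RPC pattern.
  from oslo.config import cfg
  from oslo import messaging

  transport = messaging.get_transport(cfg.CONF)

  # murano-api side: fire-and-forget dispatch of a deployment task
  client = messaging.RPCClient(transport,
                               messaging.Target(topic='murano-conductor'))
  client.cast({}, 'deploy', task={'environment_id': 'abc', 'body': {}})

  # murano-conductor side: expose the handler
  class ConductorEndpoint(object):
      def deploy(self, ctxt, task):
          # process the task; progress would go back via a client pointed
          # at a 'murano-api' topic, or via Notifications
          pass

  server = messaging.get_rpc_server(
      transport,
      messaging.Target(topic='murano-conductor', server='conductor-1'),
      [ConductorEndpoint()],
      executor='blocking')
  server.start()
  server.wait()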



On Tue, Feb 11, 2014 at 6:06 PM, Doug Hellmann
wrote:

>
>
>
> On Tue, Feb 11, 2014 at 8:05 AM, Serg Melikyan wrote:
>
>> oslo.messaging is a library
>> that provides an RPC and a Notifications API; they are part of the same library
>> for mostly historical reasons. One of the major goals of *oslo.messaging* is
>> to provide a clean RPC and Notification API without any trace of messaging
>> queue concepts (but two of the most advanced drivers used by oslo.messaging are
>> actually based on AMQP: RabbitMQ and QPID).
>>
>> We were designing Murano on messaging queue concepts, using some
>> AMQP/RabbitMQ specific features, like queue TTL. Since we never considered
>> communications between our components in terms of RPC or Notifications and
>> always thought about them as message exchange through a broker, this has
>> influenced our components' architecture. In Murano we use a simple wrapper
>> around Puka (a RabbitMQ client with a very simple and thoughtful async
>> model) that is used in all our components. We forked Puka since we had
>> specific requirements for SSL and could not yet merge our work back to
>> master.
>>
>> Can we abandon our own wrapper around our own fork of Puka in favor of
>> *oslo.messaging*? *Yes*, but this migration may be tricky. I believe we
>> can migrate to *oslo.messaging* in a week or so.
>>
>> I had played with *oslo.messaging* emulating our current communication
>> patterns with *oslo.messaging*, and I am certain that current
>> implementation can be migrated to *oslo.messaging**. *But I am not sure
>> that *oslo.messaging* may be easily suited to all future use-cases that
>> we plan to cover in a few next releases without major contributions.
>> Please, try to respond with any questions related to *oslo.messaging* 
>> implementation
>> and how it can be fitted with certain use-case.
>>
>> Below, I tried to describe our current use-cases and what specific MQ
>> features we are using, how they may be implemented with *oslo.messaging *and
>> with what limitations we will face.
>>
>> Use-Case
>> Murano has several components with communications between them based on
>> messaging queue:
>> *murano-api* -> *murano-conductor:*
>>
>>1. *murano-api* sends deployment tasks to murano-conductor
>>
>> *murano-conductor* -> *murano-api:*
>>
>>1. *murano-conductor* reports to *murano-api* task progress
>>during processing
>>2. after processing, *murano-conductor* sends results to *murano-api*
>>
>> *murano-conductor *->* murano-agent:*
>>
>>1. during task processing *murano-conductor* sends execution plans
>>with commands to *murano-agent.*
>>
>> Note: each of mentioned components above may have more than one instance.
>>
>> One of the messaging queue specifics that we use heavily is the idea of the
>> queue itself: messages sent to a component will be handled as soon as
>> at least one instance of it is started. For example, in the case of
>> *murano-agent*, the message is sent even before *murano-agent* is started.
>> Another one is queue lifetime: we control the lifetime of *murano-agent* queues
>> to avoid overflowing the MQ server with queues that are not used
>> anymore.
>>
>> One thing is also worth mentioning: *murano-conductor* communicates with
>> several compone

Re: [openstack-dev] [Mistral]

2014-02-11 Thread Renat Akhmerov
Dmitry, I think you are right here. I think for simple case we should be able 
to use in-place action definition without having to define the action 
separately. Like you said it’s only valuable if we need to reuse it.

The only difference I see between std:send-email and something like REST_API is 
that a set of parameters for the latter is dynamic (versus std:send-email where 
it’s always “recipients”, “subject”, “body”). Even though it’s still the same 
protocol (HTTP) but a particular request representation may be different (i.e. 
query string, headers, the structure of body in case POST etc.). But I think 
that doesn’t cancel the idea of being able to define the action along with the 
task itself.

So good point. As for the syntax itself, we need to think it over. In the 
snippet you provided the task says “action: std:REST_API”, so we need to make sure not to 
have ambiguities in the ways we can refer to actions. A convention could be 
“if we don’t use a namespace we assume that there’s a separate action 
definition included into the same workbook, otherwise it should be considered 
in-place action definition and task property “action” refers to an action type 
rather than the action itself”. Does that make sense?
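
Something along these lines (pure pseudo-code, not actual Mistral internals; the
helper and dict names are made up) is the resolution rule I have in mind:

  # Hypothetical sketch of the convention described above:
  #   "namespace:name" -> in-place definition, name refers to an action *type*
  #   bare "name"      -> look the action up among the workbook's own actions
  def resolve_action(task_spec, workbook_actions, action_types):
      ref = task_spec['action']
      if ':' in ref:
          namespace, type_name = ref.split(':', 1)
          # in-place action: build it from the type plus the task's parameters
          return action_types[type_name](namespace=namespace,
                                         **task_spec.get('parameters', {}))
      # otherwise the workbook must contain a full action definition
      return workbook_actions[ref]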

Renat Akhmerov
@ Mirantis Inc.

On 11 Feb 2014, at 16:23, Dmitri Zimine  wrote:

> Do we have (or have we thought about) a shorthand for calling a REST_API action, without 
> defining a service? 
> 
> FULL  DSL:
> 
> Services:
>  TimeService:
>   type: REST_API
>   parameters:
> baseUrl:http://api.timezonedb.com
> key:
>   actions:
> get-time:
>   task-parameters:
> zone:
> Workflow:
>   tasks:
>  timeInToronto:
> action: TimeService:get-time
> parameters:
>   zone: "America/Toronto"
> 
> SHORTCUT - may look something like this: 
> 
> Workflow:
>   tasks:
>   timeInToronto:
>   action:std:REST_API
>   parameters:
> baseUrl: "http://api.timezonedb.com";
> method: "GET"
> parameters: "zone=/America/Toronto&key="
> 
> Why asking:  
> 
> 1) Analogy with the std:send-email action. I wonder, do we have to make the user 
> define a Service for std:send-email? I think that for standard tasks we 
> shouldn't have to. If there is any thinking on REST_API, it may apply here. 
> 
> 2) For one-off web service calls the complete syntax may be overkill 
> (but yes, it comes in handy for reuse). See examples below. 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Glance]supporting of v1 and v2 glance APIs in Nova

2014-02-11 Thread Lingxian Kong
Greetings

A few days ago, I met some problems when using the 'createImage' feature in
Nova. We found that using v1 of glanceclient has some problems with the
processing of metadata, and that the version number and even the Glance URIs are
both hardcoded in Nova.

Then we found the blueprint[1] proposed, and the mailing list thread[2] which talked
about the topic before, mainly focused on version autodiscovery via the Keystone
catalog and a config option for Nova. But we still need changes in Nova
because of the incompatible behavior between v1 and v2, especially when
creating and uploading an image file. The review request[3] for the bp is
abandoned for now.

So, what I want to confirm is, how could this situation be handled? I
mailed Eddie Sheffield, but got no answer, so bring it up here.

[1]: https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
[2]: http://markmail.org/message/uqrpufsmh4qp5pgy
[3]: https://review.openstack.org/#/c/38414/
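
For illustration, a minimal sketch of making the version configurable when
constructing the client (a hypothetical helper, not current Nova code;
glanceclient itself already takes the API version as its first argument):

    # Hypothetical helper, not existing Nova code: pick the glance API
    # version from configuration instead of hardcoding it.
    from glanceclient import Client

    def get_image_client(endpoint, token, version='2'):
        # Client('1', ...) and Client('2', ...) expose different image
        # create/upload semantics, which is why Nova still needs some
        # version-specific handling on top of a switch like this.
        return Client(version, endpoint, token=token)

The incompatible create/upload behavior mentioned above still has to be
handled by the caller; the sketch only removes the hardcoded version number.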

-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Doug Hellmann
On Tue, Feb 11, 2014 at 8:05 AM, Serg Melikyan wrote:

> oslo.messaging  is a library
> that provides RPC and Notifications API, they are part of the same library
> for mostly historical reasons. One of the major goals of *oslo.messaging* is
> to provide a clean RPC and Notification API without any trace of messaging
> queue concepts (though two of the most advanced drivers used by
> oslo.messaging are actually based on AMQP: RabbitMQ and Qpid).
>
> We were designing Murano on messaging queue concepts using some
> AMQP/RabbitMQ specific features, like queue TTL. Since we never considered
> communications between our components in terms of RPC or Notifications, and
> always thought about them as message exchange through a broker, this has
> influenced our component architecture. In Murano we use a simple wrapper
> around Puka (a RabbitMQ client with a very simple and thoughtful async
> model) that is used in all our components. We forked Puka since we had
> specific requirements for SSL and could not yet merge our work back to
> master.
>
> Can we abandon our own wrapper around our own fork of Puka in favor of
> *oslo.messaging*? *Yes*, but this migration may be tricky. I believe we
> can migrate to *oslo.messaging* in a week or so.
>
> I had played with *oslo.messaging* emulating our current communication
> patterns with *oslo.messaging*, and I am certain that current
> implementation can be migrated to *oslo.messaging**. *But I am not sure
> that *oslo.messaging* may be easily suited to all future use-cases that
> we plan to cover in a few next releases without major contributions.
> Please, try to respond with any questions related to *oslo.messaging* 
> implementation
> and how it can be fitted with certain use-case.
>
> Below, I tried to describe our current use-cases and what specific MQ
> features we are using, how they may be implemented with *oslo.messaging *and
> with what limitations we will face.
>
> Use-Case
> Murano has several components with communications between them based on
> messaging queue:
> *murano-api* -> *murano-conductor:*
>
>1. *murano-api* sends deployment tasks to murano-conductor
>
> *murano-conductor* -> *murano-api:*
>
>1. *murano-conductor* reports to *murano-api* task progress
>during processing
>2. after processing, *murano-conductor* sends results to *murano-api*
>
> *murano-conductor *->* murano-agent:*
>
>1. during task processing *murano-conductor* sends execution plans
>with commands to *murano-agent.*
>
> Note: each of mentioned components above may have more than one instance.
>
> One of the great messaging-queue-specific features that we heavily use is
> the idea of the queue itself: messages sent to a component will be handled
> as soon as at least one instance has started. For example, in the case of
> *murano-agent*, the message is sent even before *murano-agent* is started.
> Another one is queue life-time: we control the life-time of *murano-agent*
> queues to avoid overflowing the MQ server with queues that are no longer
> used.
>
> One thing is also worth mentioning: *murano-conductor* communicates with
> several components at the same time: process several tasks at the same
> time, during task processing *murano-conductor* sends progress
> notifications to *murano-api* and execution plans to *murano-agent*.
>
> Implementation
> Please, refer to 
> Concepts
>  section of *oslo.messaging* Wiki before further reading to grasp key
> concepts expressed in *oslo.messaging* library. In short, using RPC API
> we can 'call' server synchronously and receive some result, or 'cast'
> asynchronously (no result is returned). Using Notification API we can send
> Notification to the specified Target about happened event with specified
> event_type, importance and payload.
>
> If we move to *oslo.messaging* we can only primarily rely on features
> provided by RPC/Notifications model:
>
>    1. We should not rely on message delivery when the other side is not
>    properly up and running. It is not message delivery, it is a *Remote
>    Procedure Call*;
>
> The RPC modules expect a response and provide timeout behavior, but the
notifications don't require that. Perhaps you could send those messages as
notifications?
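
For instance (a minimal, hedged sketch; the publisher id, event type and
payload below are made up for illustration, not actual Murano code), a
progress report could be emitted like this:

    # Send a progress report as an oslo.messaging notification.
    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    notifier = messaging.Notifier(transport, driver='messaging',
                                  publisher_id='murano.conductor')

    # an empty dict is fine as the request context for this example
    notifier.info({}, 'murano.task.progress',
                  {'task_id': '42', 'progress': 55})

Listeners on the murano-api side would then consume these notifications
without murano-conductor having to know whether anyone is listening.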



>
>1.
>2. To control queue life-time as we do now, we may be required to
>'hack' *oslo.messaging* by writing own driver.
>
> The library is meant to be reusable, so if the API does not support your
use case please work with us to extend it rather than hacking around it or
forking it.



>
>1.
>
> *murano-api* -> *murano-conductor:*
>
>1. *murano-ap

Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Khanh-Toan Tran
> In that model, we would pass a bunch of information about multiple

> resources to the solver scheduler, have it perform scheduling *and

> reserve the resources*, then return some kind of resource reservation

> tokens back to the caller for each resource.  The caller could then

> allocate each resource, pass in the reservation token indicating both

> that the resources had already been reserved as well as what the
specific

> resource that had been reserved (the compute-host in the case of an

> instance, for example).



Here the same problem comes back as with Heat. You can tell Climate to
regroup some VMs together. However, the bottom line is that we have no way
to “pass a bunch of information about multiple resources to the solver
scheduler”, since the current scheduler API does not allow it. It only
accepts a request for a single type of resource (flavor) and immediately
calculates a provisioning plan for that request. Thus, if we want to retain
this process and wait for all the information to be available (by passing
several requests, probably with a reservation token or a group ID as now)
before calculating the provisioning plan, then we have to change the design
of the scheduler completely, making it stateful, which, IMO, is much more
complicated than adding a new API.



> Please be aware that Climate [1] already exists for managing resources

> reservations. That doesn't make sense and has been discussed during

> last summit that reservations should not be managed by Nova, but rather

> by another service.



No argument here :)



From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Tuesday, February 11, 2014 10:39
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
Solver Scheduler







2014-02-10 18:45 GMT+01:00 Chris Friesen :



In that model, we would pass a bunch of information about multiple
resources to the solver scheduler, have it perform scheduling *and reserve
the resources*, then return some kind of resource reservation tokens back
to the caller for each resource.  The caller could then allocate each
resource, pass in the reservation token indicating both that the resources
had already been reserved as well as what the specific resource that had
been reserved (the compute-host in the case of an instance, for example).

Chris





Please be aware that Climate [1] already exists for managing resource
reservations. It has been discussed during the last summit that it doesn't
make sense for reservations to be managed by Nova; they should rather be
handled by another service.



-Sylvain



[1] : https://launchpad.net/climate

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 22, Issue 27

2014-02-11 Thread Abishek Subramanian (absubram)
Hi Shixiong,

Thanks for the reply and clearing up my question!
The document you've shared - those aren't the only possible combinations,
correct?
Each of the two fields can have any one of these four values in any
possible combination, yes?

Thanks!

On 2/11/14 7:00 AM, "openstack-dev-requ...@lists.openstack.org"
 wrote:

>Hi, Abishek:
>
>Thank you for taking care of Horizon for IPv6 enhancement. So now we have
>coverage on both CLI and dashboard side. Very exciting!
>
>W.r.t your questions, these two parameters work independently. In other
>words, Horizon should present both options if the interested subnet is
>IPv6. For each parameter, the valid values are:
>   off
>   slaac
>   dhcpv6-stateful
>   dhcpv6-stateless
>
>The CLI command may look like, for example, something below:
>
>neutron subnet-create --ip-version 6 --ipv6_ra_mode off
>--ipv6_address_mode off NETWORK CIDR
>neutron subnet-create --ip-version 6 --ipv6_ra_mode off
>--ipv6_address_mode dhcpv6-stateful NETWORK CIDR
>neutron subnet-create --ip-version 6 --ipv6_ra_mode slaac
>--ipv6_address_mode slaac NETWORK CIDR
>neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateful
>--ipv6_address_mode off NETWORK CIDR
>neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateless
>--ipv6_address_mode dhcpv6-stateless NETWORK CIDR
>
>
>The valid combinations are outlined in the PDF file below.
>
>https://www.dropbox.com/s/9bojvv9vywsz8sd/IPv6%20Two%20Modes%20v3.0.pdf
>
>Please let me know if you have any further questions. Thanks!
>
>Shixiong
>
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Openstack] [GSoC] Call for Mentors and Participants

2014-02-11 Thread Davanum Srinivas
Hi all,

Apologies if you saw this already. Since we have a really tight
deadline to drum up mentor and student participation, forwarding this
once to the openstack-dev mailing list. The deadline is this Friday
the 14th.

thanks,
dims

On Sun, Feb 9, 2014 at 5:34 PM, Davanum Srinivas  wrote:
>
> Hi everyone,
>
> Anyone wishing to get involved, please flesh out the wiki with topics,
> ideas or by listing your name as a mentor or someone looking for a
> mentee. This is the most important part of our application, we need to
> show enough mentors and project proposals for OpenStack to be accepted
> as a participating Organization. Please don't forget to add your
> contact details, link to blueprints, reviews, anything that would help
> flesh out topics/projects/ideas.
>
> https://wiki.openstack.org/wiki/GSoC2014
>
> Anyone wishing to help fill up the application, Here's our etherpad
> for our Application for GSoC 2014. Many thanks to Anne Gentle for
> digging up our application for 2012.
>
> https://etherpad.openstack.org/p/gsoc2014orgapp
>
> Thanks,
> Dims.
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




--
-Debo~


-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPv6] NAT64 Discussion

2014-02-11 Thread Xuhan Peng
In a previous Neutron IPv6 team meeting, we discussed the requirement of
providing NAT64 in OpenStack to facilitate communication between IPv6 and
IPv4 hosts, for example from a tenant IPv6 network to an external
traditional IPv4 network. It was suggested that I initiate a discussion on
the ML to gather everyone's thoughts.

I wonder if this is a fairly common requirement, and what a good way to add
this capability to OpenStack would be. Several ways were mentioned in the
previous sub-team meeting:

1. Add to current Neutron L3
This was also asked by Martinx in [1].
2. Run as service VM
3. Maybe even a new sub-project/project dedicated for address translation

I also want to mention that NAT64 may not be the only requirement for
communication between IPv6 and IPv4 hosts. 6in4 tunneling and other methods
are also candidates.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023265.html

Your comments are appreciated!

Xuhan Peng (irc: xuhanp)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Can we migrate to oslo.messaging?

2014-02-11 Thread Serg Melikyan
oslo.messaging is a library that provides RPC and Notifications API; they are
part of the same library for mostly historical reasons. One of the major goals
of *oslo.messaging* is to provide a clean RPC and Notification API without any
trace of messaging queue concepts (though two of the most advanced drivers
used by oslo.messaging are actually based on AMQP: RabbitMQ and Qpid).

We were designing Murano on messaging queue concepts using some
AMQP/RabbitMQ specific features, like queue TTL. Since we never considered
communications between our components in terms of RPC or Notifications, and
always thought about them as message exchange through a broker, this has
influenced our component architecture. In Murano we use a simple wrapper
around Puka (a RabbitMQ client with a very simple and thoughtful async model)
that is used in all our components. We forked Puka since we had specific
requirements for SSL and could not yet merge our work back to master.

Can we abandon our own wrapper around our own fork of Puka in favor of
*oslo.messaging*? *Yes*, but this migration may be tricky. I believe we can
migrate to *oslo.messaging* in a week or so.

I have played with *oslo.messaging*, emulating our current communication
patterns with it, and I am certain that the current implementation can be
migrated to *oslo.messaging*. But I am not sure that *oslo.messaging* can be
easily suited to all future use-cases that we plan to cover in the next few
releases without major contributions. Please respond with any questions
related to the *oslo.messaging* implementation and how it can be fitted to a
certain use-case.

Below, I have tried to describe our current use-cases, what specific MQ
features we are using, how they may be implemented with *oslo.messaging*, and
what limitations we will face.

Use-Case
Murano has several components with communications between them based on
messaging queue:
*murano-api* -> *murano-conductor:*

   1. *murano-api* sends deployment tasks to murano-conductor

*murano-conductor* -> *murano-api:*

   1. *murano-conductor* reports to *murano-api* task progress
   during processing
   2. after processing, *murano-conductor* sends results to *murano-api*

*murano-conductor *->* murano-agent:*

   1. during task processing *murano-conductor* sends execution plans with
   commands to *murano-agent.*

Note: each of mentioned components above may have more than one instance.

One of the great messaging-queue-specific features that we heavily use is the
idea of the queue itself: messages sent to a component will be handled as soon
as at least one instance has started. For example, in the case of
*murano-agent*, the message is sent even before *murano-agent* is started.
Another one is queue life-time: we control the life-time of *murano-agent*
queues to avoid overflowing the MQ server with queues that are no longer used.

One thing is also worth mentioning: *murano-conductor* communicates with
several components at the same time: it processes several tasks concurrently,
and during task processing it sends progress notifications to *murano-api*
and execution plans to *murano-agent*.

Implementation
Please refer to the Concepts section of the *oslo.messaging* wiki before
further reading, to grasp the key concepts expressed in the *oslo.messaging*
library. In short, using the RPC API we can 'call' a server synchronously and
receive some result, or 'cast' asynchronously (no result is returned). Using
the Notification API we can send a Notification to the specified Target about
an event, with a specified event_type, importance and payload.
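
As an illustration of the 'cast' side, a minimal sketch (the topic, method
name and arguments are made-up examples, not actual Murano code):

    # Asynchronous RPC cast with oslo.messaging: returns immediately; the
    # server picks the call up whenever an instance is listening on the topic.
    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='murano.conductor')
    client = messaging.RPCClient(transport, target)

    client.cast({}, 'deploy', task={'environment_id': '42'})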

If we move to *oslo.messaging* we can primarily rely only on the features
provided by the RPC/Notifications model:

   1. We should not rely on message delivery when the other side is not
   properly up and running. It is not message delivery, it is a *Remote
   Procedure Call* (a sketch of the receiving side follows this list);
   2. To control queue life-time as we do now, we may be required to 'hack'
   *oslo.messaging* by writing our own driver.
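
For the first point above: the receiving side is expected to be up and
listening on the topic. A hedged sketch of what that server looks like (the
endpoint class, topic and server name are made up):

    # Minimal RPC server that would handle the 'deploy' cast shown earlier.
    from oslo.config import cfg
    from oslo import messaging

    class ConductorEndpoint(object):
        def deploy(self, ctxt, task):
            # handle the deployment task here
            pass

    transport = messaging.get_transport(cfg.CONF)
    target = messaging.Target(topic='murano.conductor', server='conductor-1')
    server = messaging.get_rpc_server(transport, target,
                                      [ConductorEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()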

*murano-api* -> *murano-conductor:*

   1. *murano-api* sends deployment tasks to murano-conductor: *May be
   replaced with RPC Cast*

*murano-conductor* -> *murano-api:*

   1. *murano-conductor* reports to *murano-api* task progress
   during processing: *May be replaced with Notification* or *RPC Cast*
   2. after processing, *murano-conductor* sends results to *murano-api: **May
   be replaced with RPC Cast*

*murano-conductor *->* murano-agent:*

   1. during task processing *murano-conductor* sends execution plans with
   commands to *murano-agent*: *May be replaced with two way RPC Cast*
(murano-agent Cast
   to murano-conductor w

Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Edgar Magana
+1 of course!

Edgar 

Sent from my iPhone

> On Feb 11, 2014, at 8:28 AM, Mark McClain  wrote:
> 
> All-
> 
> I’d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
> has been valuable contributor to Neutron by actively reviewing, working on 
> bugs, and contributing code.
> 
> Neutron cores please reply back with +1/0/-1 votes.
> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Robert Kukura
On 02/10/2014 06:28 PM, Mark McClain wrote:
> All-
> 
> I’d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
> has been valuable contributor to Neutron by actively reviewing, working on 
> bugs, and contributing code.
> 
> Neutron cores please reply back with +1/0/-1 votes.

+1

> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] universal wheel support

2014-02-11 Thread Sascha Peilicke
On Monday 10 February 2014 19:02:55 Doug Hellmann wrote:
> On Mon, Feb 10, 2014 at 1:14 PM, Joe Gordon  wrote:
> > On Mon, Feb 10, 2014 at 9:00 AM, Doug Hellmann
> > 
> >  wrote:
> > > On Sat, Feb 8, 2014 at 7:08 PM, Monty Taylor 
> > 
> > wrote:
> > >> Hey all!
> > >> 
> > >> There are a bunch of patches adding:
> > >> 
> > >> [wheel]
> > >> universal = 1
> > >> 
> > >> to setup.cfg:
> > >> 
> > >> https://review.openstack.org/#/q/status:open+topic:wheel-publish,n,z
> > >> 
> > >> I wanted to follow up on what the deal is with them, and what I think
> > >> we
> > >> should do about them.
> > >> 
> > >> universal means that a wheel can be made that can work with any python.
> > >> That's awesome, and we want it - it makes the wheel publishing code
> > 
> > easier.
> > 
> > >> I don't think we want it turned on for any project that doesn't, in
> > 
> > fact,
> > 
> > >> support python3 - because we'd be producing a wheel that says it works
> > 
> > in
> > 
> > >> python3.
> > >> 
> > >> To be fair - the wheel itself will work just fine in python3 - it's
> > >> just
> > >> the software that doesn't - and we upload tarballs right now which
> > >> don't
> > >> block attempts to use them in python3.
> > >> 
> > >> SO -
> > >> 
> > >> my pedantic side says:
> > >> 
> > >> "Let's only land universal = 1 into python3 supporting projects"
> > >> 
> > >> upon further reflection, I think my other side says:
> > >> 
> > >> "It's fine, let's land it everywhere, it doesn't hurt anything, and
> > >> then
> > >> we can stop worrying about it"
> > >> 
> > >> Thoughts?
> > > 
> > > Do we have any non-library projects that support python 3?
> > 
> > yes, python-novaclient and many more
> > 
> > https://wiki.openstack.org/wiki/Python3
> 
> OK, the clients always register with me as "libraries" even though they
> include the command line programs.
> 
> Are we publishing wheels for any of the service apps where python 3 is not
> supported?
> 
> It seems safe to just go ahead and use the universal flag, but how much
> work is it to be "correct" and only set the flags for projects that are
> actually universal? 

It depends on what is viewed as "actually universal": whether the codebase is 
valid py3k code, or whether it actually works, as in all deps being ready. I 
guess a first step would be to bring the python3 status page up-to-date and 
keep that in sync with "universal" wheels.

> What are the ramifications of not using the flag
> everywhere?

We wouldn't be ultra-correct, and we would have to track this. But I don't see 
this as real work; it should go hand in hand with our Python3 porting efforts.
-- 
With kind regards,
Sascha Peilicke
SUSE Linux GmbH, Maxfeldstr. 5, D-90409 Nuernberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer HRB 16746 (AG Nürnberg)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Clint Byrum
Excerpts from Thomas Spatzier's message of 2014-02-11 00:38:53 -0800:
> Hi Clint,
> 
> thanks for writing this down. This is a really interesting use case and
> feature, also in relation to what was recently discussed on rolling
> updates.
> 
> I have a couple of thoughts and questions:
> 
> 1) The overall idea seems clear to me but I have problems understanding the
> detailed flow and relation to template definitions and metadata. E.g. in
> addition to the examples you gave in the linked etherpad, where would the
> script or whatever sit that handles the update etc.
> 

At the risk of sounding curt and unfeeling: I really don't care how your
servers do their job or talk to heat.. as long as they use the API. :)

Heat is an orchestration tool, and putting script code or explicit tool
callouts in templates is obscene to me. I understand other people are
stuck with vanilla images, but I am not, so I have very little time to
spend thinking about how that is done.

In TripleO, os-collect-config periodically polls the metadata and runs
os-refresh-config when it changes. os-refresh-config just runs a bunch
of scripts. The scripts use os-apply-config to interpret the metadata
section, either through mustache templates or like this:

# read the pending action (if any) from the Heat metadata
ACTION=$(os-apply-config --key action.pending --key-default '' --type raw)
case $ACTION in
rebuild|delete)
  # move workloads off this node, then signal the handle so Heat proceeds
  migrate_to_something_else
  ping_the_handle $(os-apply-config --key action.handle --type url)
  ;;
*)
  ;;
esac
  
We just bake all of this into our images.

> 2) I am not a big fan of CFN WaitConditions since they let too much
> programming shine thru in a template. So I wonder whether this could be
> made more transparent to the template writer. The underlying mechanism
> could still be the same, but maybe we could make the template look cleaner.
> For example, what Steve Baker is doing for software orchestration also uses
> the underlying mechanisms but does not expose WaitConditions in templates.
>

Yeah we could allocate the handle transparently without much difficulty.
However, to be debuggable we'd have to make it available as an attribute
of the server if we don't have a resource directly queryable for it. Not
sure if Steve has done that but it is pretty important to be able to
compare the handle URL you see in the stack to the one you see on the
server.
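
For illustration, signalling such a handle from inside the instance could
look roughly like the sketch below (hedged: the URL is whatever the handle
exposes via the metadata, and the payload follows the CFN-style wait
condition signal format):

    # Signal a wait-condition-style handle once workloads have been moved.
    import json
    import requests

    # in practice this URL comes from the polled metadata, not a constant
    handle_url = 'https://heat.example.com/v1/waitcondition/...'

    payload = {
        'Status': 'SUCCESS',      # or 'FAILURE' if the migration failed
        'Reason': 'workloads migrated',
        'UniqueId': 'compute-0',
        'Data': '',
    }
    requests.put(handle_url,
                 data=json.dumps(payload),
                 headers={'Content-Type': 'application/json'})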

> 3) Has the issue of how to express update policies on the rolling updates
> thread been resolved? I followed that thread but seems like there has not
> been a final decision. The reason I am bringing this up is because I think
> this is related. You are suggesting to establish a new top-level section
> 'action_hooks' in a resource. Rendering this top-level in the resource is a
> good thing IMO. However, since this is related to updates in a way (you
> want to react to any kind of update event to the resource's state), I
> wonder if those hooks could be attributes of an update policy. UpdatePolicy
> in CFN is also a top-level section in a resource and they seem to provide a
> default one like the following (I am writing this in snake case as we would
> render it in HOT):
> 
> resources:
>   autoscaling_group1:
> type: AWS::AutoScaling::AutoScalingGroup
> properties:
>   # the properties ...
> update_policy:
>   auto_scaling_rolling_update:
> min_instances_in_service: 1
> max_batch_size: 1
> pause_time: PT12M5S
> 
> (I took this from the CFN user guide).
> I.e. an update policy already is a complex data structure, and we could
> define additional types that include the resource hooks definitions you
> need. ... I don't fully understand the connection between 'actions' and
> 'path' in your etherpad example yet, so cannot define a concrete example,
> but I hope you get what I wanted to express.
> 

This also works on stack-delete. Perhaps delete is just a special update
that replaces the template with '', but update_policy seems a bit off
base given this. The two features (rolling-updates and update-hooks)
seem related, but I think only because they'd both be more useful with
the other available.

> 4) What kind of additional metadata for the update events are you thinking
> about? For example, in case this is done in an update case with a batch
> size of > 1 (i.e. you update multiple members in a cluster at a time) -
> unless I put too much interpretation in here concerning the relation to
> rolling updates - you would probably want to tell the server a black list
> of servers to which it should not migrate workload, because they will be
> taken down as well.
> 

Agreed, for rolling updates we'd need some additional clues. For just
doing explicit servers, we can think a little more statically and just
do a bit shift of the workloads (first server sends to last.. second to
first.. etc).

> 
> As I said, just a couple of thoughts, and maybe for some I am just
> mis-understanding some details.
> Anyway, I would be interested in your view.
> 

Your thoughts are most appreciated!

_

Re: [openstack-dev] [infra] Proposing Sergey Lukjanov for infra-core

2014-02-11 Thread Thierry Carrez
James E. Blair wrote:
> I'm very pleased to propose that we add Sergey Lukjanov to the
> infra-core team.
> 
> He is among the top reviewers of projects in openstack-infra, and is
> very familiar with how jenkins-job-builder and zuul are used and
> configured.  He has done quite a bit of work in helping new projects
> through the process and ensuring that changes to the CI system are
> correct.  In addition to providing very helpful reviews he has also
> contributed significant patches to our Python projects illustrating a
> high degree of familiarity with the code base and project direction.
> And as a bonus, we're all looking forward to once again having an
> infra-core member in a non-US time zone!

+1!

My only fear is that Sergey is going in too many directions at the same
time (remember he is also the Savanna^WSahara/Caravan/Batyr/Slonik PTL)
and might burn out. But so far he has been gracefully handling the
increasing load... so let's see how that goes :)

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?

2014-02-11 Thread Alan Pevec
Hi Mark and Anita,

could we declare stable/havana neutron gate jobs good enough at this point?
There are still random failures as this no-op change shows
https://review.openstack.org/72576
but I don't think they're stable/havana specific.

>> Do we have a list of those somewhere?
> Pulled out were the following Neutron patches (IMHO all innocent of gate
> breaking):
> https://review.openstack.org/62206
> https://review.openstack.org/67214
> https://review.openstack.org/70232

I've resubmitted those without "Removed from the gate..." lines in the
commit message, waiting for rechecks now.

>> I'm particularly interested in https://review.openstack.org/#/c/66149/ as a 
>> fix for
>> https://bugs.launchpad.net/keystone/+bug/1251123

This one is last remaining exception request for 2013.2.2 and is
waiting for the master change to be reviewed:
https://review.openstack.org/#/q/Ida39b4699ed6c568609a5121573fc3be5c4ab2f4,n,z
I hope keystone core could review this one quickly so that backport
can be updated and merged.

Thanks!
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Salvatore Orlando
+1
Il 11/feb/2014 10:47 "Gary Kotton"  ha scritto:

> +1
>
>
> On 2/11/14 1:28 AM, "Mark McClain"  wrote:
>
> >All-
> >
> >I¹d like to nominate Oleg Bondarev to become a Neutron core reviewer.
> >Oleg has been valuable contributor to Neutron by actively reviewing,
> >working on bugs, and contributing code.
> >
> >Neutron cores please reply back with +1/0/-1 votes.
> >
> >mark
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >
> https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-
> >bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=e
> >H0pxTUZo8NPZyF6hgoMQu%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=KKs5yVeNC6c7WnUVDuIoU
> >h%2BlWzSzcE1BVaTK%2B71PUM0%3D%0A&s=b38d5081ce0f288403c87ba38281d93cd1796c9
> >3f5482ded0916ee3b26e54774
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Dmitry Tantsur
Hi. This seems to be related:
https://bugs.launchpad.net/openstack-ci/+bug/1274135
We also encountered this.

On Tue, 2014-02-11 at 14:56 +0530, Swapnil Kulkarni wrote:
> Hello,
> 
> 
> I created a new devstack environment today and installed tox 1.7.0,
> and getting error "tox.ConfigError: ConfigError: substitution key
> 'posargs' not found".
> 
> 
> Details in [1].
> 
> 
> Anybody encountered similar error before? Any workarounds/updates
> needed?
> 
> 
> [1] http://paste.openstack.org/show/64178/
> 
> 
> 
> 
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-11 Thread Gary Kotton
+1


On 2/11/14 1:28 AM, "Mark McClain"  wrote:

>All-
>
>I¹d like to nominate Oleg Bondarev to become a Neutron core reviewer.
>Oleg has been valuable contributor to Neutron by actively reviewing,
>working on bugs, and contributing code.
>
>Neutron cores please reply back with +1/0/-1 votes.
>
>mark
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>https://urldefense.proofpoint.com/v1/url?u=http://lists.openstack.org/cgi-
>bin/mailman/listinfo/openstack-dev&k=oIvRg1%2BdGAgOoM1BIlLLqw%3D%3D%0A&r=e
>H0pxTUZo8NPZyF6hgoMQu%2BfDtysg45MkPhCZFxPEq8%3D%0A&m=KKs5yVeNC6c7WnUVDuIoU
>h%2BlWzSzcE1BVaTK%2B71PUM0%3D%0A&s=b38d5081ce0f288403c87ba38281d93cd1796c9
>3f5482ded0916ee3b26e54774


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Andreas Jaeger
On 02/11/2014 10:26 AM, Swapnil Kulkarni wrote:
> Hello,
> 
> I created a new devstack environment today and installed tox 1.7.0, and
> getting error *"tox.ConfigError: ConfigError: substitution key 'posargs'
> not found".*
> 
> Details in [1].
> 
> Anybody encountered similar error before? Any workarounds/updates needed?
> 
> [1] http://paste.openstack.org/show/64178/

Sounds like:

 https://bitbucket.org/hpk42/tox/issue/150/posargs-configerror


Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Sylvain Bauza
2014-02-10 18:45 GMT+01:00 Chris Friesen :

>
> In that model, we would pass a bunch of information about multiple
> resources to the solver scheduler, have it perform scheduling *and reserve
> the resources*, then return some kind of resource reservation tokens back
> to the caller for each resource.  The caller could then allocate each
> resource, pass in the reservation token indicating both that the resources
> had already been reserved as well as what the specific resource that had
> been reserved (the compute-host in the case of an instance, for example).
>
> Chris
>
>
Please be aware that Climate [1] already exists for managing resource
reservations. It has been discussed during the last summit that it doesn't
make sense for reservations to be managed by Nova; they should rather be
handled by another service.

-Sylvain

[1] : https://launchpad.net/climate
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Assaf Muller


- Original Message -
> From: "Swapnil Kulkarni" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Tuesday, February 11, 2014 11:26:29 AM
> Subject: [openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running   
> tests
> 
> Hello,
> 
> I created a new devstack environment today and installed tox 1.7.0, and
> getting error "tox.ConfigError: ConfigError: substitution key 'posargs' not
> found".
> 
> Details in [1].
> 
> Anybody encountered similar error before? Any workarounds/updates needed?

It's a known issue, and for a workaround personally I downgraded to tox 1.6.1.

> 
> [1] http://paste.openstack.org/show/64178/
> 
> 
> Best Regards,
> Swapnil Kulkarni
> irc : coolsvap
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev][All] tox 1.7.0 error while running tests

2014-02-11 Thread Swapnil Kulkarni
Hello,

I created a new devstack environment today and installed tox 1.7.0, and
getting error *"tox.ConfigError: ConfigError: substitution key 'posargs'
not found".*

Details in [1].

Anybody encountered similar error before? Any workarounds/updates needed?

[1] http://paste.openstack.org/show/64178/


Best Regards,
Swapnil Kulkarni
irc : coolsvap
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral]

2014-02-11 Thread Dmitri Zimine
Do we have (or have we thought about) a shorthand for calling a REST_API 
action, without defining a service? 

FULL  DSL:

Services:
  TimeService:
type: REST_API
parameters:
  baseUrl: http://api.timezonedb.com
  key:
actions:
  get-time:
task-parameters:
  zone:
Workflow:
   tasks:
  timeInToronto:
 action: TimeService:get-time
 parameters:
   zone: "America/Toronto"
   
SHORTCUT - may look something like this: 

Workflow:
   tasks:
   timeInToronto:
action:std:REST_API
parameters:
  baseUrl: "http://api.timezonedb.com";
  method: "GET"
  parameters: "zone=/America/Toronto&key="
  
Why asking:  

1) Analogy with the std:send-email action. I wonder, do we have to make the user 
define a Service for std:send-email? I think that for standard tasks we shouldn't 
have to. If there is any thinking on REST_API, it may apply here. 

2) For one-off web service calls the complete syntax may be overkill (but 
yes, it comes in handy for reuse). See examples below. 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

2014-02-11 Thread Khanh-Toan Tran
> Second, there is nothing wrong with booting the instances (or
instantiating other
> resources) as separate commands as long as we support some kind of
> reservation token.

I'm not sure what the reservation token would do. Is it some kind of way of
informing the scheduler that the resources will not be instantiated until
later? Let's consider the following example:

A user wants to create 2 VMs, a small one with 20 GB RAM and a big one with
40 GB RAM, in a datacenter consisting of 2 hosts: one with 50 GB RAM left and
another with 30 GB RAM left, using the Filter Scheduler's default RamWeigher.

If we pass the demand as two separate requests, there is a chance that the
small VM arrives first. RamWeigher will put it on the 50 GB RAM host, which
will be reduced to 30 GB RAM. Then, when the big VM request arrives, there
will be no space left to host it. As a result, the whole demand fails.

Now, if we can pass the two VMs in one request, SolverScheduler can put their
constraints all together into one big LP as follows (x_uv = 1 if VM u is
hosted on host v, 0 if not):

  50 GB RAM host constraint:    20*x_11 + 40*x_21 <= 50
  30 GB RAM host constraint:    20*x_12 + 40*x_22 <= 30
  Small VM presence constraint: x_11 + x_12 = 1
  Big VM presence constraint:   x_21 + x_22 = 1

From these constraints there is only one solution: x_11 = 0, x_12 = 1,
x_21 = 1, x_22 = 0; i.e., the small VM is hosted on the 30 GB RAM host, and
the big VM on the 50 GB RAM host.
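
As a toy illustration of that claim (not SolverScheduler code; the names are
made up), a brute-force enumeration of all placements confirms there is a
single feasible plan:

    # Enumerate every placement of the two VMs and keep those that respect
    # the free RAM on each host.
    from itertools import product

    hosts = [('host_50', 50), ('host_30', 30)]   # (name, free RAM in GB)
    vms = [('small', 20), ('big', 40)]           # (name, requested RAM in GB)

    feasible = []
    for placement in product([name for name, _ in hosts], repeat=len(vms)):
        used = dict((name, 0) for name, _ in hosts)
        for (vm, ram), host in zip(vms, placement):
            used[host] += ram
        if all(used[name] <= free for name, free in hosts):
            feasible.append(dict(zip([vm for vm, _ in vms], placement)))

    print(feasible)   # [{'small': 'host_30', 'big': 'host_50'}]

A real solver would of course optimize an objective on top of these
feasibility constraints instead of enumerating placements.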

In conclusion, if we have VMs of multiple flavors to deal with, we cannot
give the correct answer if we do not have all the information. Therefore, if
by reservation you mean that the scheduler would hold off the scheduling
process and save the information until it receives all the necessary
information, then I agree. But that is just a workaround for passing the
whole demand at once, which would be better handled by an API.

That responds to your first point, too. If we don't mind that some VMs are
placed and some are not (e.g. they belong to different apps), then it's OK to
pass them to the scheduler without an Instance Group. However, if the VMs
belong together (e.g. to the same app), then we have to put them into an
Instance Group.

> -----Original Message-----
> From: Chris Friesen [mailto:chris.frie...@windriver.com]
> Sent: Monday, February 10, 2014 18:45
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
> Solver Scheduler
>
> On 02/10/2014 10:54 AM, Khanh-Toan Tran wrote:
>
> > Heat
> > may orchestrate the provisioning process, but eventually the instances
> > will be passed to Nova-scheduler (Gantt) as separated commands, which
> > is exactly the problem Solver Scheduler wants to correct. Therefore
> > the Instance Group API is needed, wherever it is used
(nova-scheduler/Gantt).
>
> I'm not sure that this follows.
>
> First, the instance groups API is totally separate since we may want to
schedule
> a number of instances simultaneously without them being part of an
instance
> group.  Certainly in the case of using instance groups that would be one
input
> into the scheduler, but it's an optional input.
>
> Second, there is nothing wrong with booting the instances (or
instantiating other
> resources) as separate commands as long as we support some kind of
> reservation token.
>
> In that model, we would pass a bunch of information about multiple
resources
> to the solver scheduler, have it perform scheduling *and reserve the
resources*,
> then return some kind of resource reservation tokens back to the caller
for each
> resource.  The caller could then allocate each resource, pass in the
reservation
> token indicating both that the resources had already been reserved as
well as
> what the specific resource that had been reserved (the compute-host in
the case
> of an instance, for example).
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-11 Thread Flavio Percoco

On 10/02/14 05:28 +, Eiichi Aikawa wrote:

Hi, all,

Thanks for your comment.

Please let me re-explain.
The main purpose of our blueprint is to use network resources more efficiently.

To achieve this purpose, we suggested the method of using 2 lists.
We think, as I wrote before, that by listing nearby glance API servers and
using them, the total amount of data transfer across the networks can be
reduced. Especially in the case of using a microserver, the communication
destination can be limited to within the same chassis.

In addition, we think we can restore a failed server while the glance API
servers on the secondary list are being used. As a result, we can provide
higher availability than the current spec.

This bp can provide high efficiency and high availability.
But it seems you think our idea was not so good.

Please let me know your idea of which component should be changed.


I understood that. I just don't think Nova is the right place to do it. I
think this requires more than a list of weighted glance-api nodes in the
compute server. If we want to do this right, IMHO, we should be adding more
info to the endpoint lists (like location) in keystone and using that info
from the Glance client to determine which glance-api the compute node should
talk to.
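
For illustration only (a hedged sketch of the existing catalog lookup; the
location-aware filtering itself does not exist today and the credentials are
made up):

    # Look up an image endpoint from the keystone service catalog; a
    # location-aware selection would then filter/sort these endpoints.
    from keystoneclient.v2_0 import client as ks_client

    keystone = ks_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://keystone.example.com:5000/v2.0')

    image_url = keystone.service_catalog.url_for(service_type='image',
                                                 endpoint_type='publicURL')
    print(image_url)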

I'm assuming you're planning to add a new configuration option to Nova
into which you'll be able to specify a list of Glance nodes. If this
is true, I'd highly discourage doing that. Nova has enough
configuration options already, and the compute nodes' configs are already
quite divergent. Adding this would mean making Nova do things it
shouldn't do and making its configuration more complex than it already
is.

That said, I think the idea of selecting the image nodes that Nova
speaks to is a great one, so by all means keep investigating it, but
try to make it not Nova-specific.

[...]

Cheers,
Fla.


--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Sergey Lukjanov
Hi Clint,

nice blueprint. I've added a section about Savanna to the etherpad. Our
use case looks similar to Trove's - we would probably like to support
resizing nodes. Additionally, we'd like to decommission data nodes before
rebooting/shutting them down.

Thanks.


On Tue, Feb 11, 2014 at 9:22 AM, Clint Byrum  wrote:

> Hi, so in the previous thread about rolling updates it became clear that
> having in-instance control over updates is a more fundamental idea than
> I had previously believed. During an update, Heat does things to servers
> that may interrupt the server's purpose, and that may cause it to fail
> subsequent things in the graph.
>
> Specifically, in TripleO we have compute nodes that we are managing.
> Before rebooting a machine, we want to have a chance to live-migrate
> workloads if possible, or evacuate in the simpler case, before the node
> is rebooted. Also in the case of a Galera DB where we may even be running
> degraded, we want to ensure that we have quorum before proceeding.
>
> I've filed a blueprint for this functionality:
>
> https://blueprints.launchpad.net/heat/+spec/update-hooks
>
> I've cobbled together a spec here, and I would very much welcome
> edits/comments/etc:
>
> https://etherpad.openstack.org/p/heat-update-hooks
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] in-instance update hooks

2014-02-11 Thread Thomas Spatzier
Hi Clint,

thanks for writing this down. This is a really interesting use case and
feature, also in relation to what was recently discussed on rolling
updates.

I have a couple of thoughts and questions:

1) The overall idea seems clear to me but I have problems understanding the
detailed flow and relation to template definitions and metadata. E.g. in
addition to the examples you gave in the linked etherpad, where would the
script or whatever sit that handles the update etc.

2) I am not a big fan of CFN WaitConditions since they let too much
programming shine thru in a template. So I wonder whether this could be
made more transparent to the template writer. The underlying mechanism
could still be the same, but maybe we could make the template look cleaner.
For example, what Steve Baker is doing for software orchestration also uses
the underlying mechanisms but does not expose WaitConditions in templates.

3) Has the issue of how to express update policies on the rolling updates
thread been resolved? I followed that thread but seems like there has not
been a final decision. The reason I am bringing this up is because I think
this is related. You are suggesting to establish a new top-level section
'action_hooks' in a resource. Rendering this top-level in the resource is a
good thing IMO. However, since this is related to updates in a way (you
want to react to any kind of update event to the resource's state), I
wonder if those hooks could be attributes of an update policy. UpdatePolicy
in CFN is also a top-level section in a resource and they seem to provide a
default one like the following (I am writing this in snake case as we would
render it in HOT):

resources:
  autoscaling_group1:
type: AWS::AutoScaling::AutoScalingGroup
properties:
  # the properties ...
update_policy:
  auto_scaling_rolling_update:
min_instances_in_service: 1
max_batch_size: 1
pause_time: PT12M5S

(I took this from the CFN user guide).
I.e. an update policy already is a complex data structure, and we could
define additional types that include the resource hooks definitions you
need. ... I don't fully understand the connection between 'actions' and
'path' in your etherpad example yet, so cannot define a concrete example,
but I hope you get what I wanted to express.

4) What kind of additional metadata for the update events are you thinking
about? For example, in case this is done in an update case with a batch
size of > 1 (i.e. you update multiple members in a cluster at a time) -
unless I put too much interpretation in here concerning the relation to
rolling updates - you would probably want to tell the server a black list
of servers to which it should not migrate workload, because they will be
taken down as well.


As I said, just a couple of thoughts, and maybe for some I am just
mis-understanding some details.
Anyway, I would be interested in your view.

Regards,
Thomas


Clint Byrum  wrote on 11/02/2014 06:22:54:

> From: Clint Byrum 
> To: openstack-dev 
> Date: 11/02/2014 06:30
> Subject: [openstack-dev] [Heat] in-instance update hooks
>
> Hi, so in the previous thread about rolling updates it became clear that
> having in-instance control over updates is a more fundamental idea than
> I had previously believed. During an update, Heat does things to servers
> that may interrupt the server's purpose, and that may cause it to fail
> subsequent things in the graph.
>
> Specifically, in TripleO we have compute nodes that we are managing.
> Before rebooting a machine, we want to have a chance to live-migrate
> workloads if possible, or evacuate in the simpler case, before the node
> is rebooted. Also in the case of a Galera DB where we may even be running
> degraded, we want to ensure that we have quorum before proceeding.
>
> I've filed a blueprint for this functionality:
>
> https://blueprints.launchpad.net/heat/+spec/update-hooks
>
> I've cobbled together a spec here, and I would very much welcome
> edits/comments/etc:
>
> https://etherpad.openstack.org/p/heat-update-hooks
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ready to import Launchpad Answers into Ask OpenStack

2014-02-11 Thread Sergey Lukjanov
Thank you!


On Tue, Feb 11, 2014 at 11:36 AM, Stefano Maffulli wrote:

> Hi Sergey
>
> On Tue 11 Feb 2014 08:01:48 AM CET, Sergey Lukjanov wrote:
> > Stefano, is it possible to import savanna's answers@launchpad to
> ask.o.o?
>
> Yes, indeed. I apologize for not putting you explicitly in the cc list.
> The full list of projects we'll work on is on the bug itself:
>
>  https://bugs.launchpad.net/openstack-community/+bug/1212089
>
> savanna was already there, I must have overlooked your email when
> composing this request. We'll be importing during the next few days.
>
> Regards,
> Stef
>
> --
> Ask and answer questions on https://ask.openstack.org
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 framework?

2014-02-11 Thread Mathieu Rohon
Hi,

mellanox is also developing a ML2 driver :
https://blueprints.launchpad.net/neutron/+spec/mlnx-ml2-support

The Havana release is already out, and we are currently working on Icehouse.
But the code for Icehouse should be under review before Feb 18th, so it would
be difficult to have your code included in Icehouse. I think you'd better
target Juno, the next release.
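
Since the thread below is about where a new mechanism driver hooks in, here
is a bare-bones skeleton for orientation (a hedged sketch with a hypothetical
class; the real interface lives in neutron/plugins/ml2/driver_api.py and the
agent-based helpers in neutron/plugins/ml2/drivers/mech_agent.py):

    # Hypothetical SR-IOV NIC mechanism driver skeleton, illustration only.
    from neutron.plugins.ml2 import driver_api as api

    class SriovNicMechanismDriver(api.MechanismDriver):

        def initialize(self):
            # read driver-specific configuration here (e.g. which PFs to use)
            self.supported_network_types = ('flat', 'vlan')

        def create_port_postcommit(self, context):
            # context.current is the port dict; program the NIC's embedded
            # switch for the new port here
            pass

        def delete_port_postcommit(self, context):
            # clean up the embedded switch configuration for the port
            pass

The driver is then listed in the ml2 mechanism_drivers configuration option
and wired up through a setuptools entry point, the same way the existing
openvswitch and linuxbridge mechanism drivers are.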

On Tue, Feb 11, 2014 at 7:40 AM, Yang, Yi Y  wrote:
> Thank you for your detailed info, but I want to implement this in the Havana 
> release. mlnx is a good reference; what I want to implement on the Intel NIC 
> is similar to mlnx, but that is a standalone plugin and didn't use the ML2 
> framework, and I want to use the ML2 framework. I think nova has supported 
> SR-IOV in Havana, so I just need to implement the Neutron part; I hope you 
> can provide some guidance about this. BTW, we can't afford to wait for the 
> Icehouse release.
>
> -Original Message-
> From: Irena Berezovsky [mailto:ire...@mellanox.com]
> Sent: Monday, February 10, 2014 8:11 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Yang, Yi Y
> Subject: RE: [openstack-dev] How to write a new neutron L2 plugin using ML2 
> framework?
>
> Hi,
> As stated below, we are already having this work both in nova and neuron.
> Please take a look at the following discussions:
> https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting
>
> For neutron part there are two different flavors that are coming as part of 
> this effort:
> 1. Cisco SRIOV supporting 802.1QBH - no L2 agent 2. Mellanox Flavor - SRIOV 
> embedded switch ("HW_VEB") - with L2 agent.
> My guess is that second flavor of SRIOV embedded switch should work for Intel 
> NICs as well.
>
> Please join the PCI pass-through meeting discussions to see that you do not 
> do any redundant work or just follow-up on mailing list.
>
> BR,
> Irena
>
>
> -Original Message-
> From: Mathieu Rohon [mailto:mathieu.ro...@gmail.com]
> Sent: Monday, February 10, 2014 1:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] How to write a new neutron L2 plugin using ML2 
> framework?
>
> Hi,
>
> SRIOV is under implementation in nova and neutron. Did you have a look to :
> https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support
> https://blueprints.launchpad.net/neutron/+spec/ml2-binding-profile
> https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type
> https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
>
>
> On Mon, Feb 10, 2014 at 7:27 AM, Isaku Yamahata  
> wrote:
>> On Sat, Feb 08, 2014 at 03:49:46AM +, "Yang, Yi Y"
>>  wrote:
>>
>>> Hi, All
>>
>> Hi.
>>
>>
>>> I want to write a new neutron L2 plugin using the ML2 framework. I noticed 
>>> openvswitch and linuxbridge have been ported to the ML2 framework, but it 
>>> seems much code has been removed compared to the standalone L2 plugins; I 
>>> guess some code has been moved into a common library. Now I want to write 
>>> an L2 plugin to enable switching for an SR-IOV 10G NIC, and I think I need 
>>> to write the following:
>>
>
> having such a feature would be awesome : did you fill a BP for that?
>
>>
>>> 1. a new mechanism driver neutron/plugins/ml2/drivers/mech_XXX.py, but from 
>>> source code, it seems nothing to do.
>
> You mean, you want to use AgentMechanismDriverBase directly? This is an 
> abstract class due to the check_segment_for_agent method.
>
>>
>> This requires to define how your plugin utilize network.
>> If multi tenant network is wanted, what/how technology will be used.
>> The common one is VLAN or tunneling(GRE, VXLAN).
>> This depends on what feature your NIC supports.
>>
>
>>> 2. a new agent neutron/plugins/XXX/ XXX_neutron_plugin.py
>
> I don't know if this would be mandatory. Maybe you can just add the necessary 
> information with extend_port_dict while your MD binds the port, as proposed 
> by this patch:
> https://review.openstack.org/#/c/69783/
>
> Nova will then configure the port correctly. The only need for an agent would 
> be to populate the agent DB with the supported segment types, so that during 
> bind_port the MD finds an appropriate segment (with check_segment_for_agent).
>
>>>
>>> After this, an issue it how to let neutron know it and load it by default 
>>> or by configuration. Debugging is also an issue, nobody can write code 
>>> correctly once :-),  does neutron have any good debugging way for a newbie?
>>
>> LOG.debug and debug middle ware.
>> If there are any other better way, I'd also like to know.
>>
>> thanks,
>>
>>> I'm very eager to be able to get your help and sincerely thank you in 
>>> advance.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Isaku Yamahata 
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-b