Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-04 Thread victor stinner
Hi,

I promised Mark a status update on my work on Trollius and greenio, but it's 
not easy to summarize because there are still a few pending patches needed to 
implement the final greenio executor. There are different parts: asyncio, 
Trollius, greenio, Oslo Messaging.


The design of asyncio is specified in PEP 3156 (*), which was accepted and 
implemented in Python 3.4, released 4 months ago. After the release of Python 
3.4, many bugs were fixed in asyncio. The API is stable; it hasn't changed (and 
it cannot change, because backward compatibility matters in Python, even though 
the module is still tagged as "provisional" in Python 3.4).

   http://legacy.python.org/dev/peps/pep-3156/


Since January, I have regularly released new versions of Trollius. The Trollius 
API is the same as the asyncio API, except for the syntax of coroutines:

   http://trollius.readthedocs.org/#differences-between-trollius-and-tulip

The next Trollius release will probably be version 1.0, because I consider the 
API to be stable now. The last incompatible changes were made to bring Trollius 
closer to asyncio and to ease the transition from Trollius to asyncio. I also 
renamed the module from "asyncio" to "trollius", both to support Python 3.4 
(which already has an "asyncio" module in the standard library) and to make it 
more explicit that Trollius coroutines are different from asyncio coroutines.
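The syntax gap can be illustrated with a toy trampoline. This is not the real 
trollius module -- `Return`, `From` and `run()` below are stand-ins invented 
for illustration -- but it shows why `yield From(...)` and `raise Return(...)` 
were needed on Python 2, where `yield from` and `return value` inside a 
generator were syntax errors:

```python
class Return(Exception):
    """Trollius-style replacement for 'return value' in a generator."""
    def __init__(self, value):
        super().__init__(value)
        self.value = value

def From(obj):
    """Trollius-style marker wrapping a sub-coroutine; a no-op here."""
    return obj

def run(coro, value=None):
    """Drive a generator-based coroutine to completion, recursively
    running any sub-coroutine it yields (the job an event loop does)."""
    try:
        while True:
            yielded = coro.send(value)
            # A yielded generator is a sub-coroutine: run it first.
            value = run(yielded) if hasattr(yielded, "send") else yielded
    except Return as ret:          # Trollius-style return
        return ret.value
    except StopIteration as stop:  # plain generator exhausted
        return getattr(stop, "value", None)

def identity(n):
    yield                   # pretend to wait on I/O
    raise Return(n)

def add_one(n):
    value = yield From(identity(n))  # Trollius spelling of: yield from identity(n)
    raise Return(value + 1)          # Trollius spelling of: return value + 1

print(run(add_one(41)))  # -> 42
```

A real asyncio coroutine would instead write `value = yield from identity(n)` 
and `return value + 1`, which is exactly the difference documented at the 
Trollius link above.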


The greenio project was written for asyncio and is available on PyPI. greenio 
only supports a few features of asyncio -- in short, it only supports executing 
coroutines -- but that is the only feature we need in Oslo Messaging. I sent a 
pull request to port greenio to Trollius:

   https://github.com/1st1/greenio/pull/5/

The pull request requires a new "task factory"; I sent a patch to asyncio for 
that.


For Oslo Messaging, my change to poll with a timeout has been merged. (I just 
sent a fix because my change didn't work with RabbitMQ.) I will work on the 
greenio executor once the other pending patches are merged. Mark and I have 
discussed this greenio executor. It will be based on the eventlet executor, 
with a few lines added to support Trollius coroutines. We also have to modify 
the notifier to accept an optional "execute" function which executes the 
endpoint function, which may be a coroutine. According to Mark, this change is 
small and acceptable in Oslo Messaging: thanks to the "execute" function, it 
will be possible to confine greenio-related code to the greenio executor (no 
need to put greenio or trollius everywhere in Oslo Messaging).
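The shape of that "execute" hook could be sketched as follows. The names here 
are assumptions for illustration, not the actual Oslo Messaging patch: the 
dispatcher takes an optional execute() callable, so only the greenio executor 
needs to know how to run endpoints that are coroutines.

```python
def default_execute(func, *args, **kwargs):
    # eventlet path: a plain synchronous call
    return func(*args, **kwargs)

def dispatch(endpoint, message, execute=default_execute):
    # The notifier/dispatcher stays oblivious to coroutines.
    return execute(endpoint, message)

def coroutine_aware_execute(func, *args, **kwargs):
    # greenio-style path: if the endpoint returned a generator-based
    # coroutine, drive it to completion and return its final value.
    result = func(*args, **kwargs)
    if hasattr(result, "send"):
        try:
            while True:
                result.send(None)
        except StopIteration as stop:
            return getattr(stop, "value", None)
    return result

def sync_endpoint(msg):
    return msg.upper()

def coro_endpoint(msg):
    yield                  # stand-in for yielding to the event loop
    return msg.upper()     # Python 3: a generator may return a value

print(dispatch(sync_endpoint, "hi"))                                   # -> HI
print(dispatch(coro_endpoint, "hi", execute=coroutine_aware_execute))  # -> HI
```

The point is the one Mark made: everything outside the executor calls 
dispatch() unchanged, and only the greenio executor passes a coroutine-aware 
execute().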


I listed a lot of projects and pending patches, but I expect that all pieces of 
the puzzle will be done before the end of the month. We are very close to 
having a working greenio executor in Oslo Messaging ;-)


Victor


- Original Message -
> From: "Mark McLoughlin" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, 3 July 2014 17:27:58
> Subject: [openstack-dev] [oslo] Asyncio and oslo.messaging
> 
> Hey
> 
> This is an attempt to summarize a really useful discussion that Victor,
> Flavio and I have been having today. At the bottom are some background
> links - basically what I have open in my browser right now thinking
> through all of this.
> 
> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.
> 
> Ceilometer's code is run in response to various I/O events like REST API
> requests, RPC calls, notifications received, etc. We eventually want the
> asyncio event loop to be what schedules Ceilometer's code in response to
> these events. Right now, it is eventlet doing that.
> 
> Now, because we're using eventlet, the code that is run in response to
> these events looks like synchronous code that makes a bunch of
> synchronous calls. For example, the code might do some_sync_op() and
> that will cause a context switch to a different greenthread (within the
> same native thread) where we might handle another I/O event (like a REST
> API request) while we're waiting for some_sync_op() to return:
> 
>   def foo(self):
>   result = some_sync_op()  # this may yield to another greenlet
>   return do_stuff(result)
> 
> Eventlet's infamous monkey patching is what makes this magic happen.
> 
> When we switch to asyncio's event loop, all of this code needs to be
> ported to asyncio's explicitly asynchronous approach. We might do:
> 
>   @asyncio.coroutine
>   def foo(self):
>   result = yield from some_async_op(...)
>   return do_stuff(result)
> 
> or:
> 
>   @asyncio.coroutine
>   def foo(self):
>   fut = Future()
>   some_async_op(callback=fut.set_result)
>   ...
>   result = yield from fut
>   return do_stuff(result)
> 
> Porting from eventlet's implicit async approach to asyncio's explicit
> async API will be seriously time consuming and we need to be able to do
> it piece-by-piece.
> 
> The question then becomes what do we need to do in order to port a
> singl

[openstack-dev] [nova] Limits API and project-user-quotas

2014-07-04 Thread Day, Phil
Hi Folks,

Working on server group quotas, I hit an issue with the limits API that I 
wanted to get feedback on.

Currently this always shows just the project-level quotas and usage, which can 
be confusing if there is a lower user-specific quota. For example:

Project Quota = 10
User Quota = 1
User Usage = 1
Other User Usage = 2

If we show just the overall project usage and quota we get (used=3, quota=10), 
which suggests that the quota is not fully used and that I can go ahead and 
create something.

However, if we show the user quotas we get (used=1, quota=1), which correctly 
shows that I would get a quota error on creation.


But if we do switch to returning the user view of quotas and usage, we can get 
a different problem:

Project Quota = 10
User Quota = 5
User Usage = 1
Other User Usage = 9

Now if we show just the user quotas we get (used=1, quota=5), which suggests 
that there is capacity when in fact there isn't.

Whereas if we just return the overall project usage and quota (current 
behavior) we get (used=10, quota=10) - which shows that the project quota is 
fully used.
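The two scenarios above boil down to one observation: the headroom a user 
actually has is bounded by BOTH limits, so a reliable view needs the project 
and per-user values together. A minimal sketch (not Nova code, names invented 
for illustration):

```python
def headroom(project_quota, project_used, user_quota, user_used):
    """Capacity actually available to this user: constrained by
    whichever of the two limits bites first."""
    return min(project_quota - project_used, user_quota - user_used)

# First scenario: the user view alone (quota=1, used=1) is what matters.
print(headroom(10, 3, 1, 1))   # -> 0, despite the project showing 7 free

# Second scenario: the project view alone (quota=10, used=10) is what matters.
print(headroom(10, 10, 5, 1))  # -> 0, despite the user showing 4 free
```

In both cases, showing either view on its own gives a misleading non-zero 
number; only the combined minimum is always right.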


It kind of feels as if we really need to return both the project and per-user 
values if the results are going to be reliable in the face of 
project-user-quotas. But that led me to wondering whether a user that has been 
given a specific quota is meant to be able to see the corresponding overall 
project-level quota?

The quota API itself allows a user to get either the project-level quota or any 
per-user quota within that project - which does make all of the information 
available, even if it is a tad odd that the default (no user specified) is to 
see the overall quota rather than the one that applies to the user making the 
request. They cannot, however, find out project-level usage via the quotas API.

Thoughts on what the correct model is here?

Phil
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone][Horizon] Proposed Changed for Unscoped tokens.

2014-07-04 Thread Adam Young
Unscoped tokens are really a proxy for the Horizon session, so let's 
treat them that way.



1.  When a user authenticates unscoped, they should get back a list of 
their projects:


something along the lines of:

domains: [{ name: d1,
            projects: [p1, p2, p3] },
          { name: d2,
            projects: [p4, p5, p6] }]

Not the service catalog.  These are not in the token, only in the 
response body.



2.  Unscoped tokens are initially issued only via HTTPS and require client 
certificate validation or Kerberos authentication from Horizon. Unscoped 
tokens are only usable from the same origin from which they were originally 
requested.



3.  Unscoped tokens should be very short lived: 10 minutes. Unscoped 
tokens should be infinitely extensible: if I hand an unscoped token to 
Keystone, I get back one good for another 10 minutes.



4.  Unscoped tokens are only accepted in Keystone.  They can only be 
used to get a scoped token.  Only unscoped tokens can be used to get 
another token.



Comments?



Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-04 Thread Brandon Logan
Hi German,

That actually brings up another thing that needs to be done: there is
no DELETED state. When an entity is deleted, it is deleted from the
database. I'd prefer a DELETED state, so that should be another feature
we implement afterwards.
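The statuses discussed in this thread could be modelled as below. These names 
and the retention logic are a sketch of the proposals (QUEUED, DETACHED, a 
soft DELETED state with a configurable window), not the actual Neutron LBaaS 
object model:

```python
import enum
import time

class Status(enum.Enum):
    QUEUED = "QUEUED"                  # accepted, not yet provisioned
    DETACHED = "DETACHED"              # config stored, not attached to an LB
    ACTIVE = "ACTIVE"
    PENDING_UPDATE = "PENDING_UPDATE"
    ERROR = "ERROR"
    DELETED = "DELETED"                # soft-deleted, purged after retention

# Configurable, per German's suggestion: e.g. 30 days (Rackspace uses 90).
DELETED_RETENTION = 30 * 24 * 3600

def visible(status, deleted_at=None, now=None):
    """Should the API still return this entity (vs. a 404)?"""
    if status is not Status.DELETED:
        return True
    now = time.time() if now is None else now
    return (now - deleted_at) < DELETED_RETENTION

print(visible(Status.ACTIVE))                      # -> True
print(visible(Status.DELETED, deleted_at=0,
              now=DELETED_RETENTION + 1))          # -> False
```

Keeping the row with a DELETED status (instead of deleting it) is what makes 
the retention window possible at all.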

Thanks,
Brandon

On Thu, 2014-07-03 at 23:02 +, Eichberger, German wrote:
> Hi Jorge,
> 
> +1 for QUEUED and DETACHED
> 
> I would suggest making the length of time we keep entities in the DELETED
> state configurable. We use something like 30 days, too, but we have made it
> configurable to adapt to changes...
> 
> German
> 
> -Original Message-
> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
> Sent: Thursday, July 03, 2014 11:59 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not 
> exist in a driver backend
> 
> +1 to QUEUED status.
> 
> For entities that have the concept of being attached/detached why not have a 
> 'DETACHED' status to indicate that the entity is not provisioned at all (i.e. 
> The config is just stored in the DB). When it is attached during provisioning 
> then we can set it to 'ACTIVE' or any of the other provisioning statuses such 
> as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it wouldn't make much sense to 
> have a 'DELETED' status on these types of entities until the user actually 
> issues a DELETE API request (not to be confused with detaching). Which begs 
> another question, when items are deleted how long should the API return 
> responses for that resource? We have a 90 day threshold for this in our 
> current implementation after which the API returns 404's for the resource.
> 
> Cheers,
> --Jorge
> 
> 
> 
> 
> On 7/3/14 10:39 AM, "Phillip Toohill" 
> wrote:
> 
> >If the objects remain in 'PENDING_CREATE' until provisioned it would 
> >seem that the process got stuck in that status and may be in a bad 
> >state from user perspective. I like the idea of QUEUED or similar to 
> >reference that the object has been accepted but not provisioned.
> >
> >Phil
> >
> >On 7/3/14 10:28 AM, "Brandon Logan"  wrote:
> >
> >>With the new API and object model refactor there have been some issues 
> >>arising dealing with the status of entities.  The main issue is that 
> >>Listener, Pool, Member, and Health Monitor can exist independent of a 
> >>Load Balancer.  The Load Balancer is the entity that will contain the 
> >>information about which driver to use (through provider or flavor).  
> >>If a Listener, Pool, Member, or Health Monitor is created without a 
> >>link to a Load Balancer, then what status does it have?  At this point 
> >>it only exists in the database and is really just waiting to be 
> >>provisioned by a driver/backend.
> >>
> >>Some possibilities discussed:
> >>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name 
> >>Entities just remain in PENDING_CREATE until provisioned by a driver 
> >>Entities just remain in ACTIVE until provisioned by a driver
> >>
> >>Opinions and suggestions?
> >>___
> >>OpenStack-dev mailing list
> >>OpenStack-dev@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Fuel] Blueprints process

2014-07-04 Thread Dmitry Pyzhov
Do not cheat. If we need to add functionality after feature freeze, then
let's add functionality after feature freeze. No reason for additional
obfuscation. It will make our blueprint workflow harder, but it will
help us: we will see what we are really going to do and plan our work
better.

Also, we can create a beta ISO with all features in 'beta available' status.
It will help us make sure that small improvements do not break anything
and can be merged without any fear.


On Tue, Jul 1, 2014 at 3:00 PM, Vladimir Kuklin 
wrote:

> I have some objections. We are trying to follow a strict development
> workflow with feature freeze stage. In this case we will have to miss small
> enhancements that can emerge after FF date and can bring essential benefits
> along with small risks of breaking anything (e.g. changing some config
> options for galera or other stuff). We maintained such small changes as
> bugs because of this FF rule. As our project is growing, these last minute
> calls for small changes are going to be more and more probable. My
> suggestion is that we somehow modify our workflow allowing these small
> features to get through FF stage or we are risking to have an endless queue
> of enhancements that users will never see in the release.
>
>
> On Thu, Jun 26, 2014 at 8:07 PM, Matthew Mosesohn 
> wrote:
>
>> +1
>>
>> Keeping features separate as blueprints (even tiny ones with no spec)
>> really will let us focus on the volume of real bugs.
>>
>> On Tue, Jun 24, 2014 at 5:14 PM, Dmitry Pyzhov 
>> wrote:
>> > Guys,
>> >
>> > We have a beautiful contribution guide:
>> > https://wiki.openstack.org/wiki/Fuel/How_to_contribute
>> >
>> > However, I would like to address several issues in our blueprints/bugs
>> > processes. Let's discuss and vote on my proposals.
>> >
>> > 1) First of all, the bug counter is an excellent metric for quality. So
>> > let's use it only for bugs and track all feature requirement as
>> blueprints.
>> > Here is what it means:
>> >
>> > 1a) If a bug report does not describe a user’s pain, a blueprint should
>> be
>> > created and bug should be closed as invalid
>> > 1b) If a bug report does relate to a user’s pain, a blueprint should be
>> > created and linked to the bug
>> > 1c) We have an excellent reporting tool, but it needs more metrics:
>> count of
>> > critical/high bugs, count of bugs assigned to each team. It will require
>> > support of team members lists, but it seems that we really need it.
>> >
>> >
>> > 2) We have a huge amount of blueprints and it is hard to work with this
>> > list. A good blueprint needs a fixed scope, spec review and acceptance
>> > criteria. It is obvious for me that we can not work on blueprints that
>> do
>> > not meet these requirements. Therefore:
>> >
>> > 2a) Let's copy the nova future series and create a fake milestone
>> 'next' as
>> > nova does. All unclear blueprints should be moved there. We will pick
>> > blueprints from there, add spec and other info and target them to a
>> > milestone when we are really ready to work on a particular blueprint.
>> Our
>> > release page will look much more close to reality and much more
>> readable in
>> > this case.
>> > 2b) Each blueprint in a milestone should contain information about
>> feature
>> > lead, design reviewers, developers, qa, acceptance criteria. Spec is
>> > optional for trivial blueprints. If a spec is created, the designated
>> > reviewer(s) should put (+1) right into the blueprint description.
>> > 2c) Every blueprint spec should be updated before feature freeze with
>> the
>> > latest actual information. Actually, I'm not sure if we care about spec
>> > after feature development, but it seems to be logical to have correct
>> > information in specs.
>> > 2d) We should avoid creating interconnected blueprints wherever
>> possible. Of
>> > course we can have several blueprints for one big feature if it can be
>> split
>> > into several shippable blocks for several releases or for several
>> teams. In
>> > most cases, small parts should be tracked as work items of a single
>> > blueprint.
>> >
>> >
>> > 3) Every review request without a bug or blueprint link should be
>> checked
>> > carefully.
>> >
>> > 3a) It should contain a complete description of what is being done and
>> why
>> > 3b) It should not require backports to stable branches (backports are
>> > bugfixes only)
>> > 3c) It should not require changes to documentation or be mentioned in
>> > release notes
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04

[openstack-dev] [Murano] Field 'name' is removed from Apps dynamic UI markup, should 'Version' be changed?

2014-07-04 Thread Timur Sufiev
Hi, folks!

Recently we decided to change a bit how Murano's dynamic UI works:
namely, to no longer explicitly specify the 'name' field in the first 'Add
Application' form, but to add it there automatically, since every
component in Murano has a name. To avoid confusion with a 'name'
field added by hand to the first form's markup, a 'name' field on the
first step will be forbidden, and processing of old UI markup containing
such a field will cause an exception. All these changes are
described in greater detail in the blueprint [1].

What is not entirely clear to me is whether we should increase the
'Version' attribute of the UI markup or not. On one hand, the format of the
UI markup is definitely changing - old UI definitions won't work with
the UI processor after [1] is implemented - so it is quite reasonable to
bump the format's version to reflect that fact. On the other hand, we
will hardly support both format versions; instead we'll rewrite the UI
markup in all existing Murano Apps (there are not so many of them yet)
and eventually forget that once upon a time the user needed to specify
the 'name' field explicitly.

What do you think?

[1] 
https://blueprints.launchpad.net/murano/+spec/dynamic-ui-specify-no-explicit-name-field

-- 
Timur Sufiev



Re: [openstack-dev] How introduction of SDN controllers impact OpenStack

2014-07-04 Thread jcsf
Hi Thulasi,

 

Your question is a very good one, and one well worth a long 
conversation.

 

The answer is that not all SDN controllers are based solely on OpenFlow. The 
Neutron API (not code) provides a contract for an SDN controller to provision 
the network on behalf of Nova requests and other parts of OpenStack. This is 
how Neutron was conceived/designed, and this is how SDN controller developers 
think about the demarcation. Provisioning multi-vendor network equipment in a 
multi-technology world is a difficult thing to accomplish. This is why no one 
technology solution will work for all cases.

 

SDN controller developers specialize in thinking about this problem and use a 
wide range of technologies to accomplish it. OpenFlow is very effective at 
end-points (OVS) but is not yet proven as a mid-point substrate. Traditional 
technologies – VLANs, IP/OSPF/BGP, MPLS, optical – and software subsystems 
(e.g. proprietary EMSs) make up the midpoint landscape. Accessing and 
provisioning these midpoints requires a multitude of technologies spanning the 
control plane and management plane (e.g. netconf/yang, TL1, CLI, 
vendor-specific proprietary APIs). The job of the SDN controller is to 
service the contract provided by Neutron by ensuring the end-to-end connection 
is available for Nova. How this is accomplished depends on the actual 
topology and equipment used.

 

Also, it is perhaps better for the Neutron code to be developed with this in 
mind. Pluggable and modular code is more effective than dictating a particular 
solution/situation. Allowing the controller plugin to create the OVS ports 
and other EMS functions is better than dictating that the Neutron code always 
do it. Of course, Neutron should provide the modules to create the port 
(re-usable code) but should not hard-code where that is called in the call 
chain.

 

Hope this answers your question – and provokes discussion. 

From: Thulasi ram Valleru [mailto:thulasiram.vall...@gmail.com] 
Sent: Friday, July 4, 2014 6:57 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] How introduction of SDN controllers impact OpenStack

 

Guys,

What tasks SDN controller plugin do. Consider we are not installing ML2 
plugin, we have only one plugin ie, SDN controller one on Neutron. 

 

 SDN plugin can manage SDN controller and ovs on hypervisor. Neutron is 
now able to create a port on Hypervisor and gives the information to Neutron 
server. Neutron server gives these details to SDn controller. SDN controller 
now knows which port belong to which physical server. here i am able to see 
only thing SDN controller can do is update the flow tables on ovs.If open flow 
physical devices are there in network, then it can gather information like 
which port is attached to which physical device on physical server. So it can 
install flows on physical network devices also.

 

 What about physical devices which doesn't support open flow. how does 
SDn controller take care of network provisioning by neutron.

 

 



Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-04 Thread Mark McLoughlin
On Fri, 2014-07-04 at 15:31 +0200, Ihar Hrachyshka wrote:
> Hi all,
> at the moment we have several bot jobs that sync contents to affected
> projects:
> 
> - translations are copied from transifex;
> - requirements are copied from global requirements repo.
> 
> We have another source of common code - oslo-incubator, though we
> still rely on people manually copying the new code from there to
> affected projects. This results in old, buggy, and sometimes
> completely different versions of the same code in all projects.
> 
> I wonder why don't we set another bot to sync code from incubator? In
> that way, we would:
> - reduce work to do for developers [I hope everyone knows how boring
> it is to fill in commit message with all commits synchronized and
> create sync requests for > 10 projects at once];
> - make sure all projects use (almost) the same code;
> - ensure projects are notified in advance in case API changed in one
> of the modules that resulted in failures in gate;
> - our LOC statistics will be a bit more fair ;) (currently, the one
> who syncs a large piece of code from incubator to a project, gets all
> the LOC credit at e.g. stackalytics.com).
> 
> The changes will still be gated, so any failures and incompatibilities
> will be caught. I even don't expect most of sync requests to fail at
> all, meaning it will be just a matter of two +2's from cores.
> 
> I know that Oslo team works hard to graduate lots of modules from
> incubator to separate libraries with stable API. Still, I guess we'll
> live with incubator at least another cycle or two.
> 
> What are your thoughts on that?

Just repeating what I said on IRC ...

The point of oslo-incubator is that it's a place where APIs can be
cleaned up so that they are ready for graduation. Code living in
oslo-incubator for a long time with unchanging APIs is not the idea. An
automated sync job would IMHO discourage API cleanup work. I'd expect
people would start adding lots of ugly backwards API compat hacks with
their API cleanups just to stop people complaining about failing
auto-syncs. That would be the opposite of what we're trying to achieve.

Mark.




Re: [openstack-dev] [oslo][all] autosync incubator to projects

2014-07-04 Thread Duncan Thomas
If the bot did sensible things (syncing one module at a time plus deps
where necessary, preferably using dependent patches where possible for
deps, and commit messages that list all of the individual commits
being synced), I think this could be great. It is quite tricky, however,
to automatically figure out the last sync point in order to list all of
the commits pulled in - and some projects have (had?) local
commits on top of the imports that made things even more complex...
On 4 July 2014 14:31, Ihar Hrachyshka  wrote:
> Hi all,
> at the moment we have several bot jobs that sync contents to affected
> projects:
>
> - - translations are copied from transifex;
> - - requirements are copied from global requirements repo.
>
> We have another source of common code - oslo-incubator, though we
> still rely on people manually copying the new code from there to
> affected projects. This results in old, buggy, and sometimes
> completely different versions of the same code in all projects.
>
> I wonder why don't we set another bot to sync code from incubator? In
> that way, we would:
> - - reduce work to do for developers [I hope everyone knows how boring
> it is to fill in commit message with all commits synchronized and
> create sync requests for > 10 projects at once];
> - - make sure all projects use (almost) the same code;
> - - ensure projects are notified in advance in case API changed in one
> of the modules that resulted in failures in gate;
> - - our LOC statistics will be a bit more fair ;) (currently, the one
> who syncs a large piece of code from incubator to a project, gets all
> the LOC credit at e.g. stackalytics.com).
>
> The changes will still be gated, so any failures and incompatibilities
> will be caught. I even don't expect most of sync requests to fail at
> all, meaning it will be just a matter of two +2's from cores.
>
> I know that Oslo team works hard to graduate lots of modules from
> incubator to separate libraries with stable API. Still, I guess we'll
> live with incubator at least another cycle or two.
>
> What are your thoughts on that?
>
> /Ihar
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas



Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-04 Thread Chris Friesen

On 07/03/2014 10:13 PM, Mike Spreitzer wrote:


I do think the issue these address --- the need to get application logic
involved in, e.g., shutdown --- is most of what an application needs;
involvement in selection of which member(s) to delete is much less
important (provided that clean shutdown mechanism prevents concurrent
shutdowns).


I assume this is more of the whole "cattle" model, where an instance 
could disappear at any time so applications should design for that?


As an alternate viewpoint, if a particular instance in a group is 
working on something "expensive" (long-running, difficult to checkpoint, 
etc.), maybe it would make sense to allow the application to help make 
the decision on which instance to shut down (or possibly even veto/delay 
the scale down operation).  If it takes a minute to finish the 
operation, and 15 minutes to redo it on another instance...
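Chris's alternate viewpoint amounts to a selection hook: let the application 
rank (or veto) the candidates before the group picks a victim. A sketch under 
assumed names - this is not Heat's API, just the shape of the idea:

```python
def pick_victim(instances, app_cost):
    """Return the instance whose loss costs the application least.
    app_cost(instance) returns an estimated cost (e.g. seconds of work
    lost), or None to veto removing that instance. Returns None when
    every candidate is vetoed, i.e. the scale-down should be delayed."""
    allowed = [(app_cost(i), i) for i in instances if app_cost(i) is not None]
    if not allowed:
        return None
    return min(allowed)[1]

# One instance is mid-way through a 15-minute job; the nearly-idle one goes.
costs = {"vm-1": 900, "vm-2": 5}                 # seconds of work lost
print(pick_victim(["vm-1", "vm-2"], costs.get))  # -> vm-2
```

A veto-capable hook like this also needs a timeout policy, otherwise an 
application that always vetoes can block scale-down indefinitely.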


Chris



[openstack-dev] How introduction of SDN controllers impact OpenStack

2014-07-04 Thread Thulasi ram Valleru
Guys,
What tasks does the SDN controller plugin do? Suppose we are not
installing the ML2 plugin and have only one plugin, i.e. an SDN controller
plugin, on Neutron.

 The SDN plugin can manage the SDN controller and the OVS on the
hypervisor. Neutron is now able to create a port on the hypervisor and
give the information to the Neutron server. The Neutron server gives these
details to the SDN controller, which then knows which port belongs to which
physical server. Here the only thing I can see the SDN controller doing is
updating the flow tables on OVS. If OpenFlow physical devices are present in
the network, it can also gather information such as which port is attached to
which physical device on the physical server, so it can install flows on
physical network devices as well.

 What about physical devices that don't support OpenFlow? How does
the SDN controller take care of the network provisioning requested by Neutron?


Re: [openstack-dev] [TripleO] Use MariaDB by default on Fedora

2014-07-04 Thread Giulio Fidente

On 07/01/2014 05:47 PM, Michael Kerrin wrote:

I propose making mysql an abstract element; the user must choose either the
percona or mariadb-rpm element. CI must be set up correctly


+1

seems a cleaner and more sustainable approach
--
Giulio Fidente
GPG KEY: 08D733BA



[openstack-dev] [oslo][all] autosync incubator to projects

2014-07-04 Thread Ihar Hrachyshka

Hi all,
at the moment we have several bot jobs that sync contents to affected
projects:

- translations are copied from transifex;
- requirements are copied from the global requirements repo.

We have another source of common code - oslo-incubator, though we
still rely on people manually copying the new code from there to
affected projects. This results in old, buggy, and sometimes
completely different versions of the same code in all projects.

I wonder why we don't set up another bot to sync code from the incubator.
That way, we would:
- reduce the work developers have to do [I hope everyone knows how boring
it is to fill in a commit message with all the commits synchronized and
create sync requests for > 10 projects at once];
- make sure all projects use (almost) the same code;
- ensure projects are notified in advance in case an API changed in one
of the modules in a way that causes failures in the gate;
- make our LOC statistics a bit more fair ;) (currently, the one
who syncs a large piece of code from incubator to a project gets all
the LOC credit at e.g. stackalytics.com).

The changes will still be gated, so any failures and incompatibilities
will be caught. I don't even expect most sync requests to fail at
all, meaning it will be just a matter of two +2's from cores.

I know that Oslo team works hard to graduate lots of modules from
incubator to separate libraries with stable API. Still, I guess we'll
live with incubator at least another cycle or two.

What are your thoughts on that?

/Ihar



Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-04 Thread Anita Kuno
On 07/04/2014 08:11 AM, Duncan Thomas wrote:
> On 2 July 2014 16:11, Anita Kuno  wrote:
>> Hmmm, my first response - given that long chew we had on the ml[0] about
>> the use of the word certified as well as the short confirmation we had
>> in the tc meeting[1] that the word certified would not be used, but
>> rather some version of the word 'tested' - how long until edits can be
>> made to the cinder wiki to comply with that agreement?
> 
> It's a wiki - anybody who cares enough can go and re-word it...
> 
> 
> 
True, but it is a cinder wiki. I don't know what words you want to use
and don't want to lose the intent of the page.

Also, given that some folks in other places in cinder are still using
references to 'certified', I think it sends a strong message if someone
in a leadership position in cinder makes the edits.

Thanks Duncan,
Anita.



[openstack-dev] [UX] Meeting Reminder - July 7, 17:00 UTC

2014-07-04 Thread Jaromir Coufal

Hi UXers,

this is a reminder that our next regular IRC meeting is happening on 
Monday, July 7th at 17:00 UTC at #openstack-meeting-3.


Agenda: https://wiki.openstack.org/wiki/Meetings/UX
Feel free to add topics which you are interested in.

See you all there
-- Jarda



Re: [openstack-dev] [nova] Audit Log

2014-07-04 Thread Boris Pavlovic
Noorul,

Regarding logging, the first thing we should resolve is having a standard
way to deal with logs.


1) E.g. logging as a service (https://github.com/stackforge/logaas), which
would tell all services where to write logs and present a unified query
API (independent of the backend, e.g. logstash).

2) There is a big piece of work needed to unify logs so that they can be
analyzed.

3) We shouldn't use LOGs for profiling (because putting all DB calls into
logs would just kill any LOG storage system). But no worries, for profiling
there will be another solution (I hope soon):
https://github.com/stackforge/osprofiler

Best regards,
Boris Pavlovic






On Fri, Jul 4, 2014 at 3:56 PM, Noorul Islam K M  wrote:

>
> Hello all,
>
> I was looking for audit logs in nova. I found [1] but could not find the
> launchpad entry audit-logging as mentioned in the wiki page.
>
> Is this yet to be implemented or am I looking at the wrong place?
>
> Regards,
> Noorul
>
> [1] https://wiki.openstack.org/wiki/AuditLogging
>


Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 & python27

2014-07-04 Thread Amit Das
Perhaps Jenkins can display a better message to start with.

Being new to OpenStack, I had no clue what was happening. However, the
fix was simple once understood.

The current message made me think something was wrong with the Jenkins
environment setup.



Regards,
Amit
*CloudByte Inc.* 


On Fri, Jul 4, 2014 at 5:39 PM, Duncan Thomas 
wrote:

> On 30 June 2014 07:47, Steve Kowalik  wrote:
> > Personally, I think generating and comparing a sample config every build
> > is daft, and a sample configuration should be generated during sdist or
> > something.
>
> This argument has gone back and forth several times.
>
> There is definite value to a reviewer in seeing the sample conf
> changes - it is usually the first thing I look for with a change, to
> get a feel for what has been done and look for obvious backward
> compatibility issues, of which there have been lots.
>
> Gating on it in its current form is clearly broken since it doesn't
> take into account that the libraries from Oslo might have changed things.
> Ideally Jenkins would, after installing all the deps, git checkout the
> parent, generate the conf, checkout the change, generate the conf and
> post up the diffs, but coding that up is tricky.
>


Re: [openstack-dev] [all] 3rd Party CI vs. Gerrit

2014-07-04 Thread Duncan Thomas
On 2 July 2014 16:11, Anita Kuno  wrote:
> Hmmm, my first response - given that long chew we had on the ml[0] about
> the use of the word certified as well as the short confirmation we had
> in the tc meeting[1] that the word certified would not be used, but
> rather some version of the word 'tested' - how long until edits can be
> made to the cinder wiki to comply with that agreement?

It's a wiki - anybody who cares enough can go and re-word it...



-- 
Duncan Thomas



Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 & python27

2014-07-04 Thread Duncan Thomas
On 30 June 2014 07:47, Steve Kowalik  wrote:
> Personally, I think generating and comparing a sample config every build
> is daft, and a sample configuration should be generated during sdist or
> something.

This argument has gone back and forth several times.

There is definite value to a reviewer in seeing the sample conf
changes - it is usually the first thing I look for with a change, to
get a feel for what has been done and look for obvious backward
compatibility issues, of which there have been lots.

Gating on it in its current form is clearly broken since it doesn't
take into account that the libraries from Oslo might have changed things.
Ideally Jenkins would, after installing all the deps, git checkout the
parent, generate the conf, checkout the change, generate the conf and
post up the diffs, but coding that up is tricky.
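
The diffing half of that ideal job is straightforward once the two sample
configs exist; a minimal sketch with stdlib difflib (the file labels, and
the generation step itself, are assumed here):

```python
import difflib

def sample_conf_diff(parent_conf, change_conf):
    """Unified diff between the sample conf generated at the parent
    commit and the one generated with the change applied."""
    return "".join(difflib.unified_diff(
        parent_conf.splitlines(keepends=True),
        change_conf.splitlines(keepends=True),
        fromfile="cinder.conf.sample@parent",
        tofile="cinder.conf.sample@change"))

print(sample_conf_diff("[DEFAULT]\n#foo=1\n",
                       "[DEFAULT]\n#foo=1\n#bar=2\n"))
```

The tricky part is the surrounding git choreography (checkout parent,
generate, checkout change, generate), not the comparison itself.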



[openstack-dev] [ceilometer][oslo] telemetry_notification_api.py timeouts

2014-07-04 Thread Doug Hellmann
I'm trying to land a patch to the global requirements list and it is
failing on a
telemetry test timeout (bug 1336755). The result is we're blocked on having
anyone adopt oslo.i18n.

Is there anything I might be able to do to help with reproducing the
problem or diagnosing it?

Doug



Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 & python27

2014-07-04 Thread Amit Das
Thanks a lot.

The recommendations worked fine.

Regards,
Amit
*CloudByte Inc.* 


On Fri, Jul 4, 2014 at 3:34 PM, Yuriy Taraday  wrote:

> On Fri, Jul 4, 2014 at 12:57 PM, Amit Das  wrote:
>
>> Hi All,
>>
>> I can see a lot of cinder gerrit commits that pass through the
>> gate-cinder-python26 & gate-cinder-python27 successfully.
>>
>> ref - https://github.com/openstack/cinder/commits/master
>>
>> Whereas it's not the case for my patch
>> https://review.openstack.org/#/c/102511/.
>>
>> I updated the master & rebased that to my branch before doing a gerrit
>> review.
>>
>> Am I missing any steps?
>>
>
> Does 'tox -e py26' work on your local machine? It should fail just as it
> does in the gate.
> You should follow the instructions it provides in the log just before
> 'InvocationError' - run tools/config/generate_sample.sh.
> The issue is that you've added some options to your driver but didn't
> update etc/cinder/cinder.conf.sample.
> After generating the new sample you should verify its diff (git diff
> etc/cinder/cinder.conf.sample) and add it to your commit.
>
>
> --
>
> Kind regards, Yuriy.
>


[openstack-dev] [nova] Audit Log

2014-07-04 Thread Noorul Islam K M

Hello all,

I was looking for audit logs in nova. I found [1] but could not find the
launchpad entry audit-logging as mentioned in the wiki page.

Is this yet to be implemented or am I looking at the wrong place?

Regards,
Noorul

[1] https://wiki.openstack.org/wiki/AuditLogging



[openstack-dev] [Heat][Ceilometer] A proposal to enhance ceilometer alarm

2014-07-04 Thread Qiming Teng
Hi,

In current Alarm implementation, Ceilometer will send back Heat an
'alarm' using the pre-signed URL (or other channel under development).
The alarm carries a payload that looks like:

 {
   alarm_id: ID
   previous: ok
   current: alarm
   reason: transition to alarm due to n samples outside threshold,
   most recent: 
   reason_data: {
     type: threshold
     disposition: inside
     count: x
     most_recent: value
   }
 }

While this data structure is useful for some simple use cases, it can be
enhanced to carry more useful data.  Some usage scenarios are:

 - When a member of an AutoScalingGroup is dead (e.g. accidentally
   deleted), Ceilometer can detect this from an event with
   count='instance', event_type='compute.instance.delete.end'.  If an
   alarm is created out of this event, the AutoScalingGroup may have a
   chance to recover the member when appropriate.  The requirement is
   for this alarm to tell Heat which instance is dead.
 - When a VM connected to multiple subnets is experiencing bandwidth
   problem, an alarm can be generated telling Heat which subnet is to be
   checked.

We believe there will be many other use cases expecting an alarm to
carry some 'useful' information beyond just a state transition. Below is
a proposal to solve this.  Any comments are welcome.

1. extend the alarm with an optional parameter, say, 'output', which is
   a map or an equivalent representation.  A user can specify some
   key=value pairs using this parameter, where 'key' is a convenience
   name for the user and 'value' specifies a field from a Sample whose
   value will be filled in here.

   e.g. --output instance=metadata.instance_id;timestamp=timestamp

2. extend the Ceilometer alarm-evaluator service, so that when an alarm
   requiring output values is seen, it will try matching the 'value'
   specified above to the fields in a sample, and replace the output
   entry with 'key='.

   e.g. "output": {
          "instance": "bd56bb53-d07f-49a6-8f60-6f8ef1336060",
          "timestamp": "2014-07-01 02:21:13.002155"
        }

   The above data is passed back to the alarm_url as part of its
   existing payload.

   If alarm-evaluator cannot find a matching field, it can fill in an
   empty string, or just "None".
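
A minimal sketch of the matching step in point 2, assuming the sample is
available as a nested dict and the dotted paths from the --output example
above (the helper is illustrative, not actual alarm-evaluator code):

```python
def resolve_output(output_spec, sample):
    """Fill each requested output key with the matching sample field.

    output_spec maps user keys to dotted field paths, e.g.
    {'instance': 'metadata.instance_id', 'timestamp': 'timestamp'}.
    Fields with no match fall back to None, as proposed above.
    """
    result = {}
    for key, path in output_spec.items():
        value = sample
        for part in path.split('.'):
            if isinstance(value, dict) and part in value:
                value = value[part]
            else:
                value = None   # no matching field in this sample
                break
        result[key] = value
    return result

sample = {'timestamp': '2014-07-01 02:21:13.002155',
          'metadata': {'instance_id': 'bd56bb53-d07f-49a6-8f60-6f8ef1336060'}}
spec = {'instance': 'metadata.instance_id', 'timestamp': 'timestamp'}
print(resolve_output(spec, sample))
```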

3. extend the OS::Ceilometer::Alarm resource type in Heat so that an
   optional property (say, 'output') of type map can be used to specify
   what are expected from the Alarm.

Since it is an additional field in the 'details' argument, the impact on
existing Heat templates/users will be negligible.  However, the
expressive power of carrying back additional fields would be a great
help in scenarios we have yet to discover.

Because this is a cross-project proposal, comments from both communities
are valuable and thus appreciated.  If it is a viable approach, should
we raise two specs, one in each project respectively?


Regards,
  - Qiming




Re: [openstack-dev] [DevStack] neutron config not working

2014-07-04 Thread Anant Patil
I did some dhcpdump and I could see that the requests were coming
to the DHCP server and replies were sent.

I am not able to ping the VMs from anywhere, DHCP namespace or
Router namespace. I guess the problem is with the interface not getting
configured with the supplied IP address.


On Fri, Jul 4, 2014 at 4:49 PM, Anant Patil  wrote:

> I am using Ubuntu 14.04 and running devstack from top of the tree.
> Everything goes fine, but I am not able to ping the instance IP
> addresses. I am not able to log into the VM using novnc, but I
> am sure the VM is not getting the IP Address.
>
>
> On Fri, Jul 4, 2014 at 4:47 PM, Anant Patil 
> wrote:
>
>> Paul, you need to run the command as admin. If you are sourcing
>> openrc as the demo tenant or something, it will not list them.
>>
>> However, I also face this issue of IP address not getting assigned.
>>
>>
>> On Fri, Jul 4, 2014 at 2:35 AM, Kyle Mestery 
>> wrote:
>>
>>> On Thu, Jul 3, 2014 at 10:14 AM, Paul Czarkowski
>>>  wrote:
>>> > I'm seeing similar. Instances launch, they show as having IPs in
>>> > `neutron list`, but I cannot access them via IP.
>>> >
>>> > The other thing I've noticed is that doing a `neutron agent-list`
>>> > gives me an empty list; I would assume it should at least show the
>>> > DHCP agent?
>>> >
>>> Which plugin are you using? For ML2 with OVS or LB, you should have L2
>>> agents on each compute host in addition to the DHCP and L3 agents. I
>>> think perhaps your problem is different than Rob's.
>>>
>>> > On 7/1/14, 12:00 PM, "Kyle Mestery"  wrote:
>>> >
>>> >>Hi Rob:
>>> >>
>>> >>Can you try adding the following config to your local.conf? I'd like
>>> >>to see if this gets you going or not. It will force it to use gre
>>> >>tunnels for tenant networks. By default it will not.
>>> >>
>>> >>ENABLE_TENANT_TUNNELS=True
>>> >>
>>> >>On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden 
>>> >>wrote:
>>> >>> Rob Crittenden wrote:
>>>  Mark Kirkwood wrote:
>>> > On 25/06/14 10:59, Rob Crittenden wrote:
>>> >> Before I get punted onto the operators list, I post this here
>>> because
>>> >> this is the default config and I'd expect the defaults to just
>>> work.
>>> >>
>>> >> Running devstack inside a VM with a single NIC configured and
>>> this in
>>> >> localrc:
>>> >>
>>> >> disable_service n-net
>>> >> enable_service q-svc
>>> >> enable_service q-agt
>>> >> enable_service q-dhcp
>>> >> enable_service q-l3
>>> >> enable_service q-meta
>>> >> enable_service neutron
>>> >> Q_USE_DEBUG_COMMAND=True
>>> >>
>>> >> Results in a successful install but no DHCP address assigned to
>>> >>hosts I
>>> >> launch and other oddities like no CIDR in nova net-list output.
>>> >>
>>> >> Is this still the default way to set things up for single node?
>>> It is
>>> >> according to https://wiki.openstack.org/wiki/NeutronDevstack
>>> >>
>>> >>
>>> >
>>> > That does look ok: I have an essentially equivalent local.conf:
>>> >
>>> > ...
>>> > ENABLED_SERVICES+=,-n-net
>>> >
>>> ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
>>> >
>>> > I don't have 'neutron' specifically enabled... not sure if/why that
>>> > might make any difference tho. However instance launching and ip
>>> >address
>>> > assignment seem to work ok.
>>> >
>>> > However I *have* seen the issue of instances not getting ip addresses
>>> > in single host setups, and it is often due to use of virtio with
>>> > bridges (which is the default I think). Try:
>>> >
>>> > nova.conf:
>>> > ...
>>> > libvirt_use_virtio_for_bridges=False
>>> 
>>>  Thanks for the suggestion. At least in master this was replaced by a
>>> new
>>>  section, libvirt, but even setting it to False didn't do the trick
>>> for
>>>  me. I see the same behavior.
>>> >>>
>>> >>> OK, I've tested the havana and icehouse branches in F-20 and they
>>> don't
>>> >>> seem to have a working neutron either. I see the same thing. I can
>>> >>> launch a VM but it isn't getting a DHCP address.
>>> >>>
>>> >>> Maybe I'll try in some Ubuntu release to see if this is
>>> Fedora-specific.
>>> >>>
>>> >>> rob
>>> >>>
>>> >>>

Re: [openstack-dev] [DevStack] neutron config not working

2014-07-04 Thread Anant Patil
I am using Ubuntu 14.04 and running devstack from top of the tree.
Everything goes fine, but I am not able to ping the instance IP
addresses. I am not able to log into the VM using novnc, but I
am sure the VM is not getting the IP Address.


On Fri, Jul 4, 2014 at 4:47 PM, Anant Patil  wrote:

> Paul, you need to run the command as admin. If you are sourcing
> openrc as the demo tenant or something, it will not list them.
>
> However, I also face this issue of IP address not getting assigned.
>
>
> On Fri, Jul 4, 2014 at 2:35 AM, Kyle Mestery 
> wrote:
>
>> On Thu, Jul 3, 2014 at 10:14 AM, Paul Czarkowski
>>  wrote:
>> > I'm seeing similar. Instances launch, they show as having IPs in
>> > `neutron list`, but I cannot access them via IP.
>> >
>> > The other thing I've noticed is that doing a `neutron agent-list`
>> > gives me an empty list; I would assume it should at least show the
>> > DHCP agent?
>> >
>> Which plugin are you using? For ML2 with OVS or LB, you should have L2
>> agents on each compute host in addition to the DHCP and L3 agents. I
>> think perhaps your problem is different than Rob's.
>>
>> > On 7/1/14, 12:00 PM, "Kyle Mestery"  wrote:
>> >
>> >>Hi Rob:
>> >>
>> >>Can you try adding the following config to your local.conf? I'd like
>> >>to see if this gets you going or not. It will force it to use gre
>> >>tunnels for tenant networks. By default it will not.
>> >>
>> >>ENABLE_TENANT_TUNNELS=True
>> >>
>> >>On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden 
>> >>wrote:
>> >>> Rob Crittenden wrote:
>>  Mark Kirkwood wrote:
>> > On 25/06/14 10:59, Rob Crittenden wrote:
>> >> Before I get punted onto the operators list, I post this here
>> because
>> >> this is the default config and I'd expect the defaults to just
>> work.
>> >>
>> >> Running devstack inside a VM with a single NIC configured and this
>> in
>> >> localrc:
>> >>
>> >> disable_service n-net
>> >> enable_service q-svc
>> >> enable_service q-agt
>> >> enable_service q-dhcp
>> >> enable_service q-l3
>> >> enable_service q-meta
>> >> enable_service neutron
>> >> Q_USE_DEBUG_COMMAND=True
>> >>
>> >> Results in a successful install but no DHCP address assigned to
>> >>hosts I
>> >> launch and other oddities like no CIDR in nova net-list output.
>> >>
>> >> Is this still the default way to set things up for single node? It
>> is
>> >> according to https://wiki.openstack.org/wiki/NeutronDevstack
>> >>
>> >>
>> >
>> > That does look ok: I have an essentially equivalent local.conf:
>> >
>> > ...
>> > ENABLED_SERVICES+=,-n-net
>> > ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
>> >
>> > I don't have 'neutron' specifically enabled... not sure if/why that
>> > might make any difference tho. However instance launching and ip
>> >address
>> > assignment seem to work ok.
>> >
>> > However I *have* seen the issue of instances not getting ip addresses
>> > in single host setups, and it is often due to use of virtio with
>> > bridges (which is the default I think). Try:
>> >
>> > nova.conf:
>> > ...
>> > libvirt_use_virtio_for_bridges=False
>> 
>>  Thanks for the suggestion. At least in master this was replaced by a
>> new
>>  section, libvirt, but even setting it to False didn't do the trick
>> for
>>  me. I see the same behavior.
>> >>>
>> >>> OK, I've tested the havana and icehouse branches in F-20 and they
>> don't
>> >>> seem to have a working neutron either. I see the same thing. I can
>> >>> launch a VM but it isn't getting a DHCP address.
>> >>>
>> >>> Maybe I'll try in some Ubuntu release to see if this is
>> Fedora-specific.
>> >>>
>> >>> rob
>> >>>
>> >>>


Re: [openstack-dev] [DevStack] neutron config not working

2014-07-04 Thread Anant Patil
Paul, you need to run the command as admin. If you are sourcing
openrc as the demo tenant or something, it will not list them.

However, I also face this issue of IP address not getting assigned.


On Fri, Jul 4, 2014 at 2:35 AM, Kyle Mestery 
wrote:

> On Thu, Jul 3, 2014 at 10:14 AM, Paul Czarkowski
>  wrote:
> > I'm seeing similar. Instances launch, they show as having IPs in
> > `neutron list`, but I cannot access them via IP.
> >
> > The other thing I've noticed is that doing a `neutron agent-list`
> > gives me an empty list; I would assume it should at least show the
> > DHCP agent?
> >
> Which plugin are you using? For ML2 with OVS or LB, you should have L2
> agents on each compute host in addition to the DHCP and L3 agents. I
> think perhaps your problem is different than Rob's.
>
> > On 7/1/14, 12:00 PM, "Kyle Mestery"  wrote:
> >
> >>Hi Rob:
> >>
> >>Can you try adding the following config to your local.conf? I'd like
> >>to see if this gets you going or not. It will force it to use gre
> >>tunnels for tenant networks. By default it will not.
> >>
> >>ENABLE_TENANT_TUNNELS=True
> >>
> >>On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden 
> >>wrote:
> >>> Rob Crittenden wrote:
>  Mark Kirkwood wrote:
> > On 25/06/14 10:59, Rob Crittenden wrote:
> >> Before I get punted onto the operators list, I post this here
> because
> >> this is the default config and I'd expect the defaults to just work.
> >>
> >> Running devstack inside a VM with a single NIC configured and this
> in
> >> localrc:
> >>
> >> disable_service n-net
> >> enable_service q-svc
> >> enable_service q-agt
> >> enable_service q-dhcp
> >> enable_service q-l3
> >> enable_service q-meta
> >> enable_service neutron
> >> Q_USE_DEBUG_COMMAND=True
> >>
> >> Results in a successful install but no DHCP address assigned to
> >>hosts I
> >> launch and other oddities like no CIDR in nova net-list output.
> >>
> >> Is this still the default way to set things up for single node? It
> is
> >> according to https://wiki.openstack.org/wiki/NeutronDevstack
> >>
> >>
> >
> > That does look ok: I have an essentially equivalent local.conf:
> >
> > ...
> > ENABLED_SERVICES+=,-n-net
> > ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
> >
> > I don't have 'neutron' specifically enabled... not sure if/why that
> > might make any difference tho. However instance launching and ip
> >address
> > assignment seem to work ok.
> >
> > However I *have* seen the issue of instances not getting ip addresses
> > in single host setups, and it is often due to use of virtio with
> > bridges (which is the default I think). Try:
> >
> > nova.conf:
> > ...
> > libvirt_use_virtio_for_bridges=False
> 
>  Thanks for the suggestion. At least in master this was replaced by a
> new
>  section, libvirt, but even setting it to False didn't do the trick for
>  me. I see the same behavior.
> >>>
> >>> OK, I've tested the havana and icehouse branches in F-20 and they don't
> >>> seem to have a working neutron either. I see the same thing. I can
> >>> launch a VM but it isn't getting a DHCP address.
> >>>
> >>> Maybe I'll try in some Ubuntu release to see if this is
> Fedora-specific.
> >>>
> >>> rob
> >>>
> >>>


Re: [openstack-dev] [infra][devstack-gate] mad rechecks

2014-07-04 Thread Martin Geisler
"Sergey Skripnick"  writes:

> After 2 days and 11 rechecks I finally got +1
>
> https://review.openstack.org/#/c/104123/
>
> failed jobs:
>
> job times
> check-dg-tempest-dsvm-full  1
> check-dg-tempest-dsvm-full-reexec   3
> check-grenade-dsvm  3
> check-grenade-dsvm-partial-ncpu 3
> check-tempest-dsvm-full-havana  1
> check-tempest-dsvm-neutron  1
> check-tempest-dsvm-neutron-heat-slow1
> check-tempest-dsvm-postgres-full1
> check-tempest-dsvm-postgres-full-icehouse   1
>
>
> Is there any hope that jobs will just work? Such a number of failures
> leads to a significant amount of extra work for test nodes.

I recently submitted some patches that cleaned up the comment headers of
Python files. Since I was editing a comment, I was surprised to see test
failures. I counted 12 failed and 61 successful test runs -- about 16%
of the test runs failed for no good reason.

-- 
Martin Geisler

http://google.com/+MartinGeisler




Re: [openstack-dev] [nova] nova bug scrub web page

2014-07-04 Thread Jian Wen
Awesome!

Any plan to add the neutron project?



2014-07-04 5:00 GMT+08:00 Tracy Jones :

>  Hi Folks – I have taken a script from the infra folks and jogo, made
> some tweaks and have put it into a web page.  Please see it here
> http://54.201.139.117/demo.html
>
>
>  This is all of the new, confirmed, triaged, and in progress bugs that we
> have in nova as of a couple of hours ago.  I have added ways to search it,
> sort it, and filter it based on
>
>  1.  All bugs
> 2.  Bugs that have not been updated in the last 30 days
> 3.  Bugs that have never been updated
> 4.  Bugs in progress
> 5.  Bugs without owners.
>
>
>  I chose this as they are things I was interested in seeing, but there
> are obviously a lot of other things I can do here.  I plan on adding a cron
> job to update the data every hour or so.  Take a look and let me know if
> you have feedback.
>
>  Tracy
>
>
>


-- 
Best,

Jian


[openstack-dev] [infra][devstack-gate] mad rechecks

2014-07-04 Thread Sergey Skripnick


After 2 days and 11 rechecks I finally got +1

https://review.openstack.org/#/c/104123/

failed jobs:

job times
check-dg-tempest-dsvm-full  1
check-dg-tempest-dsvm-full-reexec   3
check-grenade-dsvm  3
check-grenade-dsvm-partial-ncpu 3
check-tempest-dsvm-full-havana  1
check-tempest-dsvm-neutron  1
check-tempest-dsvm-neutron-heat-slow1
check-tempest-dsvm-postgres-full1
check-tempest-dsvm-postgres-full-icehouse   1


Is there any hope that jobs will just work? Such a number of failures
leads to a significant amount of extra work for test nodes.


--
Regards,
Sergey Skripnick



Re: [openstack-dev] [nova][libvirt] why use domain destroy instead of shutdown?

2014-07-04 Thread Day, Phil
Hi Melanie,

I have a BP (https://review.openstack.org/#/c/89650) and the first couple of 
bits of implementation (https://review.openstack.org/#/c/68942/  
https://review.openstack.org/#/c/99916/) out for review on this very topic ;-)

Phil

> -Original Message-
> From: melanie witt [mailto:melw...@outlook.com]
> Sent: 04 July 2014 03:13
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [nova][libvirt] why use domain destroy instead of
> shutdown?
> 
> Hi all,
> 
> I noticed in nova/virt/libvirt/driver.py we use domain destroy instead of
> domain shutdown in most cases (except for soft reboot). Is there a special
> reason we don't use shutdown to do a graceful shutdown of the guest for
> the stop, shelve, migrate, etc functions? Using destroy can corrupt the guest
> file system.
> 
> Thanks,
> Melanie
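
For context, the graceful-shutdown-with-fallback behavior being discussed
can be sketched as below. dom.shutdown(), dom.isActive() and dom.destroy()
are real libvirt domain API calls; the helper, timeout, and polling scheme
are illustrative only, not Phil's actual patches:

```python
import time

def clean_shutdown(dom, timeout=60, poll=1, sleep=time.sleep):
    """Ask the guest to shut down (ACPI signal) and wait for it to stop;
    fall back to a hard destroy() only if it doesn't stop in time."""
    dom.shutdown()
    waited = 0
    while waited < timeout:
        if not dom.isActive():
            return True          # guest stopped cleanly
        sleep(poll)
        waited += poll
    dom.destroy()                # last resort; risks guest fs corruption
    return False
```

The sleep callable is injectable only to make the sketch easy to test.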



Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 & python27

2014-07-04 Thread Yuriy Taraday
On Fri, Jul 4, 2014 at 12:57 PM, Amit Das  wrote:

> Hi All,
>
> I can see a lot of cinder gerrit commits that pass through the
> gate-cinder-python26 & gate-cinder-python27 successfully.
>
> ref - https://github.com/openstack/cinder/commits/master
>
> Whereas it's not the case for my patch
> https://review.openstack.org/#/c/102511/.
>
> I updated the master & rebased that to my branch before doing a gerrit
> review.
>
> Am I missing any steps?
>

Does 'tox -e py26' work on your local machine? It should fail just as it
does in the gate.
You should follow the instructions it provides in the log just before
'InvocationError' - run tools/config/generate_sample.sh.
The issue is that you've added some options to your driver but didn't
update etc/cinder/cinder.conf.sample.
After generating the new sample you should verify its diff (git diff
etc/cinder/cinder.conf.sample) and add it to your commit.


-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Jenkins] [Cinder] InvocationError in gate-cinder-python26 & python27

2014-07-04 Thread Amit Das
Hi All,

I can see a lot of cinder gerrit commits that pass through the
gate-cinder-python26 & gate-cinder-python27 successfully.

ref - https://github.com/openstack/cinder/commits/master

Whereas it's not the case for my patch
https://review.openstack.org/#/c/102511/.

I updated the master & rebased that to my branch before doing a gerrit
review.

Am I missing any steps?

Regards,
Amit
*CloudByte Inc.* 


On Mon, Jun 30, 2014 at 12:17 PM, Steve Kowalik 
wrote:

> On 30/06/14 16:37, Amit Das wrote:
> > I have been facing below issues at gate-cinder-python26 &
> > gate-cinder-python27 after uploading my patch.
> >
> > I assume this to be an infrastructure issue rather than an issue with my patch.
> > Can someone please confirm ?
> >
> > .
> > 2014-06-30 05:41:57.704 | check_uptodate.sh: cinder.conf.sample is not
> > up to date.
>
> This usually means that Oslo (the common libraries used by many
> projects) has added a configuration option. It's in fact a problem with
> cinder, in that they need to make sure the sample configuration file is
> up to date, but it will affect all patches until it is fixed. I'm sure
> it will get sorted out quickly.
>
> Personally, I think generating and comparing a sample config every build
> is daft, and a sample configuration should be generated during sdist or
> something.
>
> Cheers,
> --
> Steve
> "Stop breathing down my neck!"
> "My breathing is merely a simulation."
> "So is my neck! Stop it anyway."
>  - EMH vs EMH, USS Prometheus
>


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-04 Thread Julien Danjou
On Thu, Jul 03 2014, Mark McLoughlin wrote:

> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.

Thumbs up for the plan, that sounds like a good approach from what I
got. I just think there are a lot of things that are going to be
synchronous anyway, because not everything provides an asynchronous
alternative (e.g. SQLAlchemy and requests don't yet, AFAIK). It doesn't
worry me much, as there's nothing we can do on our side, except encourage
people to stop writing synchronous APIs¹.
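In the meantime, the usual workaround is to push the blocking call onto the event loop's thread pool. A minimal sketch using plain asyncio (modern coroutine syntax for brevity — Trollius-era code would spell this differently, and blocking_call is just a stand-in for a synchronous SQLAlchemy or requests call):

```python
import asyncio
import time


def blocking_call():
    # stand-in for a synchronous library call (e.g. SQLAlchemy or requests)
    time.sleep(0.05)
    return "row"


async def handler(loop):
    # offload the blocking call to the loop's default thread pool,
    # so the event loop itself stays responsive while it runs
    return await loop.run_in_executor(None, blocking_call)


loop = asyncio.new_event_loop()
result = loop.run_until_complete(handler(loop))
loop.close()
print(result)  # row
```

This keeps the loop responsive, but each blocking call still ties up a pool thread, so it is a stopgap rather than a real asynchronous alternative.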

And big +1 for using Ceilometer as a test bed. :)


¹  I'm sure you're familiar with Xlib vs XCB in this regard ;)

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info




Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-04 Thread Daniel P. Berrange
On Thu, Jul 03, 2014 at 03:30:06PM -0400, Russell Bryant wrote:
> On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
> > Hi,
> > 
> > ==
> > tl; dr: A decision has been made to split out the scheduler to a
> > separate project not on a feature parity basis with nova-scheduler, your
> > comments are welcome.
> > ==
> 
> ...
> 
> > During the last Gantt meeting held Tuesday, we discussed the
> > status and the problems we have. As we are close to Juno-2, there are
> > some concerns about which blueprints will be implemented by Juno, so
> > Gantt would be updated after. Due to the problems raised in the
> > different blueprints (please see the links there), it has been agreed to
> > follow a path a bit different from the one agreed at the Summit: once
> > B/ is merged, Gantt will be updated and work will happen in there, while
> > work on C/ will happen in parallel. That means we need to backport to
> > Gantt all changes happening to the scheduler, but (and this is the most
> > important point) until C/ is merged into Gantt, Gantt won't support
> > filters which decide on aggregates or instance groups. In other words,
> > until C/ happens (but also A/), Gantt won't have feature parity with
> > nova-scheduler.
> > 
> > That doesn't mean Gantt will move forward and leave all missing features
> > out of it; we will treat feature parity as the top priority, but that
> > implies that the first releases of Gantt will be experimental and
> > considered for testing purposes only.
> 
> I don't think this sounds like the best approach.  It sounds like effort
> will go into maintaining two schedulers instead of continuing to focus
> effort on the refactoring necessary to decouple the scheduler from Nova.
>  It's heading straight for a "nova-network and Neutron" scenario, where
> we're maintaining both for much longer than we want to.

Yeah, that's my immediate reaction too. I know it sounds like the Gantt
team are aiming to do the right thing by saying "feature parity as the
top priority", but I'm concerned that this won't work out that way in
practice.

> I strongly prefer not starting a split until it's clear that the switch
> to the new scheduler can be done as quickly as possible.  That means
> that we should be able to start a deprecation and removal timer on
> nova-scheduler.  Proceeding with a split now will only make it take even
> longer to get there, IMO.
> 
> This was the primary reason the last Gantt split was scrapped.  I don't
> understand why we'd go at it again without finishing the job first.

Since Gantt is there primarily to serve Nova's needs, I don't see why
we need to rush into a split that won't actually be capable of serving
Nova's needs, rather than waiting until the prerequisite work is ready.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-04 Thread Luke Gorrie
On 3 July 2014 19:02, Jay Pipes  wrote:

> devstack-gate works very well for what it is supposed to do:
>

Yeah, I would actually love to use devstack-gate.

I tried that first. There are two problems for me as a user:

First I didn't manage to get it up and running reliably in a reasonable
time frame (one week). In that time I was only starting to develop a mental
model of how to troubleshoot problems. Getting support on IRC is awkward
and especially so from my timezone (and especially-especially so while
being a dad to a newborn baby).

Second, it exposes me to criticism for being lazy and/or incompetent, because
people think it's very easy to set up. This easily escalates into threats to
delete all of my code from OpenStack for being a bad CI citizen, despite
the fact that from my perspective I am starting early and working hard.
(Havana was fun, and I'm writing working code, so how do I end up being
cast as a bad guy?)

Hacking up a custom CI in shell is pure desperation on my part because I
need something that I am able to operate and maintain.

Let's compare notes over a coffee in Paris :-).

Cheers,
-Luke


[openstack-dev] Swift 2.0.0 RC2 available for testing

2014-07-04 Thread Thierry Carrez
Hi everyone,

A new release candidate (RC2) for Swift 2.0.0 was cut to include three
small fixes: two are to account for cached data during rolling upgrades
and one handles extended storage policy reconciliation.

You can find the RC2 source code tarball at:
http://tarballs.openstack.org/swift/swift-2.0.0.rc2.tar.gz

Alternatively, you can access the proposed 2.0.0 code branch at:
http://git.openstack.org/cgit/openstack/swift/log/?h=proposed/2.0.0

These fixes are isolated issues that don't invalidate existing QA work,
so unless new release-critical issues are found, the plan is still to
release 2.0.0 final on Monday.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Moving neutron to oslo.db

2014-07-04 Thread Roman Podoliaka
Ben,

Neutron was updated to the latest version of db code from
oslo-incubator. That's probably all.

Thanks,
Roman

On Thu, Jul 3, 2014 at 8:10 PM, Ben Nemec  wrote:
> +27, -2401
>
> Wow, that's pretty painless.  Were there earlier patches to Neutron to
> prepare for the transition or was it really that easy?
>
> On 07/03/2014 07:34 AM, Salvatore Orlando wrote:
>> No, I was missing everything and kept wasting time because of alembic.
>>
>> This will teach me to keep my mouth shut and not distract people who are
>> actually doing good work.
>>
>> Thanks for doing this work.
>>
>> Salvatore
>>
>>
>> On 3 July 2014 14:15, Roman Podoliaka  wrote:
>>
>>> Hi Salvatore,
>>>
>>> I must be missing something. Hasn't it been done in
>>> https://review.openstack.org/#/c/103519/? :)
>>>
>>> Thanks,
>>> Roman
>>>
>>> On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando 
>>> wrote:
 Hi,

 As you surely know, in Juno oslo.db will graduate [1].
 I am currently working on the port. It's already been made clear that making
 alembic migrations "idempotent" and healing the DB schema is a requirement
 for this task.
 These two activities are tracked by blueprints [2] and [3].
 I think we've seen enough in OpenStack to understand that there is no chance
 of being able to do the port to oslo.db in Juno.

 While blueprint [2] is already approved, I suggest also targeting [3] for
 Juno so that we might be able to port Neutron to oslo.db as soon as K opens.
 I expect this port to not be as invasive as the one for oslo.messaging,
 which required quite a lot of patches.

 Salvatore

 [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [2] https://review.openstack.org/#/c/95738/
 [3] https://review.openstack.org/#/c/101963/
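The "idempotent" migrations mentioned above can be illustrated with a toy sketch using the stdlib sqlite3 module (the table name is made up, and real Neutron migrations go through alembic rather than raw SQL): the existence guard makes re-applying the migration a no-op instead of an error.

```python
import sqlite3


def upgrade(conn):
    # the "IF NOT EXISTS" guard is what makes this migration idempotent:
    # applying it to a schema that already has the table does nothing
    conn.execute("CREATE TABLE IF NOT EXISTS routers (id TEXT PRIMARY KEY)")


conn = sqlite3.connect(":memory:")
upgrade(conn)
upgrade(conn)  # second application is a safe no-op, not a failure

tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['routers']
```

Schema "healing" is the same idea applied in reverse: inspect what actually exists in the database and bring it in line with the expected schema, instead of assuming a pristine starting point.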

>>>
>>
>>
>>
>>
>
>



Re: [openstack-dev] DVR and FWaaS integration

2014-07-04 Thread Narasimhan, Vivekanandan
Hi Yi,



Swami will be available from this week.



Will it be possible for you to join the regular DVR Meeting (Wed 8AM PST) next 
week and we can slot that to discuss this.



I see that FwaaS is of much value for E/W traffic (which has challenges), but 
for me it looks easier to implement the same in N/S with the

current DVR architecture, but there might be less takers on that.



--
Thanks,
Vivek





From: Yi Sun [mailto:beyo...@gmail.com]
Sent: Thursday, July 03, 2014 11:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] DVR and FWaaS integration



The N/S FW will be on a centralized node for sure. The DVR + FWaaS solution
is really for E/W traffic. If you are interested in the topic, please propose
your preferred meeting time and join the meeting so that we can discuss
it.

Yi

On 7/2/14, 7:05 PM, joehuang wrote:

   Hello,



   It's hard to integrate DVR and FWaaS. My proposal is to split FWaaS into 
two parts: one part is for east-west FWaaS; this part could be done on the DVR 
side, making it distributed in nature. The other part is for the north-south 
traffic; this part could be done on the Network Node side, meaning it works in 
a centralized manner. After the split, north-south FWaaS could be implemented 
in software or hardware, while east-west FWaaS is better implemented in 
software given its distributed nature.



   Chaoyi Huang ( Joe Huang )

   OpenStack Solution Architect

   IT Product Line

   Tel: 0086 755-28423202 Cell: 0086 158 118 117 96 Email: 
joehu...@huawei.com

   Huawei Area B2-3-D018S Bantian, Longgang District,Shenzhen 518129, P.R.China



   From: Yi Sun [mailto:beyo...@gmail.com]
   Sent: July 3, 2014 4:42
   To: OpenStack Development Mailing List (not for usage questions)
   Cc: Kyle Mestery (kmestery); Rajeev; Gary Duan; Carl (OpenStack Neutron)
   Subject: Re: [openstack-dev] DVR and FWaaS integration



   All,

   After talking to Carl and the FWaaS team, both sides suggested calling a 
meeting to discuss this topic in more detail. I heard that Swami is traveling 
this week, so I guess the earliest we can have a meeting is sometime next 
week. I will be out of town on Monday, so any day after Monday should work for 
me. We can do IRC, a Google Hangout, GMT or even face to face.

   For anyone interested, please propose your preferred time.

   Thanks

   Yi



   On Sun, Jun 29, 2014 at 12:43 PM, Carl Baldwin wrote:

   In line...

   On Jun 25, 2014 2:02 PM, "Yi Sun" wrote:
   >
   > All,
   > During the last summit, we were talking about the integration issues 
between DVR and FWaaS. After the summit, I had one IRC meeting with the DVR 
team, but after that meeting I was tied up with my work and did not get time 
to continue following up on the issue. To not slow down the discussion, I'm 
forwarding the email that I sent out as the follow-up to the IRC meeting here, 
so that whoever may be interested in the topic can continue to discuss it.
   >
   > First some background about the issue:
   > In the normal case, the FW and router run together inside the same box 
so that the FW can get route and NAT information from the router component. 
And in order for the FW to function correctly, the FW needs to see both 
directions of the traffic.
   > DVR is designed in an asymmetric way so that each DVR only sees one leg 
of the traffic. If we build the FW on top of DVR, then FW functionality will 
be broken. We need to find a good method to make the FW work with DVR.
   >
   > ---forwarding email---
   >  During the IRC meeting, we thought that we could force the traffic to 
the FW before DVR. Vivek had more detail; he thinks that since the br-int 
knows whether a packet is routed or switched, it is possible for the br-int to 
forward traffic to the FW before it forwards to DVR. The whole forwarding 
process can operate as part of a service-chain operation. And there could be a 
FWaaS driver that understands the DVR configuration and sets up OVS flows on 
the br-int.

   I'm not sure what this solution would look like.  I'll have to get the 
details from Vivek.  It seems like this would effectively centralize the 
traffic that we worked so hard to decentralize.

   It did cause me to wonder about something: would it be possible to regain 
the symmetry of the traffic by directing any response traffic back to the DVR 
component which handled the request traffic? I guess this would require 
running conntrack on the target side to track and identify return traffic. 
I'm not sure how this would be inserted into the data path yet. This is a 
half-baked idea here.
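   The idea above can be sketched as a toy connection table (component and flow names are made up, and real conntrack works on kernel flow state, not a Python dict): the request path records which DVR component handled a flow, and replies are looked up by the reversed 5-tuple.

```python
# Toy sketch of the conntrack-style idea: remember which DVR component
# handled the outbound leg of a flow, then steer the reply leg back to
# the same component so it sees both directions of the traffic.
conntrack = {}


def outbound(src, dst, proto, sport, dport, dvr_component):
    # request path: record the component that handled this flow's 5-tuple
    conntrack[(src, dst, proto, sport, dport)] = dvr_component


def inbound(src, dst, proto, sport, dport):
    # reply path: the reversed 5-tuple identifies the original flow
    return conntrack.get((dst, src, proto, dport, sport))


outbound("10.0.0.2", "10.0.1.5", "tcp", 40000, 80, "dvr-compute-1")
reply_target = inbound("10.0.1.5", "10.0.0.2", "tcp", 80, 40000)
print(reply_target)  # dvr-compute-1
```

   The open question in the thread is exactly where such a lookup would sit in the data path; the sketch only shows why per-flow state restores the symmetry the FW needs.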

   > The concern is that normally the firewall and router are integrated 
together so that the firewall can make the right decision based on the routing 
result. But what we are suggesting is to split the firewall and router into 
two separate components, hence there could be issues. For example, the FW 
will not be

Re: [openstack-dev] [neutron] Specs repo

2014-07-04 Thread Yuriy Taraday
Every commit landing in any repo should be synchronized to GitHub. I
filed a bug to track this issue here:
https://bugs.launchpad.net/openstack-ci/+bug/1337735


On Fri, Jul 4, 2014 at 3:30 AM, Salvatore Orlando 
wrote:

> git.openstack.org has an up-to-date log:
> http://git.openstack.org/cgit/openstack/neutron-specs/log/
>
> Unfortunately I don't know what the policy is for syncing repos with
> GitHub.
>
> Salvatore
>
>
> On 4 July 2014 00:34, Sumit Naiksatam  wrote:
>
>> Is this still the right repo for this:
>> https://github.com/openstack/neutron-specs
>>
>> The latest commit on the master branch shows a June 25th timestamp, but
>> we have had a lot of patches merging after that. Where are those
>> going?
>>
>> Thanks,
>> ~Sumit.
>>
>>
>
>
>
>


-- 

Kind regards, Yuriy.