Re: [Openstack] How do you manage your openstack utilisation?

2017-07-23 Thread Mike Smith
One thing we do at Overstock.com to try to help with
this concern in our dev and test environments is to require that an expiration
date be associated with each OpenStack resource.  As users spin up an
environment, they are given an expiration date such that if they don’t renew their
‘lease’, their instances are suspended and then destroyed shortly thereafter.
This expiration information is stored outside OpenStack, in the system that they
interact with for requesting resources.

It doesn’t solve all your problems - since users can (and often do) just keep
renewing their environments - but it does ensure that if people do not tell
us that they are still actively using their resources, those resources are removed.
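
For illustration, a minimal sketch of what such an expiration sweep could look
like, assuming the lease data lives in the external request system and using
python-novaclient; the lease iterable, grace period and credentials are all
hypothetical, not what Overstock.com actually runs::

    # Hypothetical sweep: suspend instances whose lease has expired and delete
    # them once a grace period has also passed.
    import datetime

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    GRACE = datetime.timedelta(days=7)

    def get_nova():
        loader = loading.get_plugin_loader('password')
        auth = loader.load_from_options(
            auth_url='http://controller:5000/v3',
            username='admin', password='secret', project_name='admin',
            user_domain_name='Default', project_domain_name='Default')
        return nova_client.Client('2', session=session.Session(auth=auth))

    def sweep(leases):
        """leases: iterable of (instance_id, expires_at) from the lease store."""
        nova = get_nova()
        now = datetime.datetime.utcnow()
        for instance_id, expires_at in leases:
            if expires_at > now:
                continue
            server = nova.servers.get(instance_id)
            if expires_at + GRACE < now:
                server.delete()       # long expired: reclaim the resources
            elif server.status == 'ACTIVE':
                server.suspend()      # recently expired: suspend as a warning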


On Jul 23, 2017, at 5:51 PM, Manuel Sopena Ballesteros wrote:

Dear Openstack community,

We are a medical research institute and we have been running HPC for many
years. We started playing with OpenStack a few months ago and we like its
flexibility for deploying multiple environments. However, we are quite concerned
about resource utilization: in HPC, resources are released back to the rest of
the community once a job has finished, whereas a VM keeps its resources for the
owner of the VM until the instance is killed.

I would like to ask: how do you organize the resources used by OpenStack to
maximize utilization across the organization?

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: 
manuel...@garvan.org.au

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [neutron] cannot list "default" security group with Neutron API?

2017-07-23 Thread Kevin Benton
Hi,

This sounds like it may be a bug. My guess is that when we switched to
project ID, a hook was not updated to create the default security group when
a project ID is passed instead of a tenant ID (this logic [1] in
particular).

Can you please file a bug on Launchpad and reference this email thread? We
should be able to get it fixed pretty quickly.

1.
https://github.com/openstack/neutron/blob/71d9aab87e37b5162ef09b8cbe3b72709fc88a8b/neutron/db/securitygroups_db.py#L146-L153

Cheers,
Kevin Benton

On Tue, Jun 27, 2017 at 3:30 AM, Riccardo Murri wrote:

> Hello,
>
> I'm trying to add some rules to the "default" security group of a
> newly-created project, using the Neutron API 2.0.
>
> However, it seems that the "default" security group is automatically
> created but it is not returned by Neutron client's
> `list_security_groups()` API call.  My code works just fine if I use any
> security group name other than "default".
>
> This is an example interaction, which shows that there is no security
> group returned for the project::
>
> >>> project.id
> u'b26ed1aa29e64c3abeade0a47867eee3'
> >>> response = self.neutron.list_security_groups()  # self.neutron is a neutron_client.v2.Client instance
> >>> secgroups = response['security_groups']
> >>> all_sg_ids = [(sg['id'], sg['tenant_id']) for sg in secgroups]
> >>> all_sg_ids
> [(u'01de4e38-55ea-4b82-8583-274b1bded41a', u'0ff1f3d07fbd4d41892cdf85d7a7d1a9'), ... ]
> >>> len(all_sg_ids)
> 17
> >>> project_sg_ids = [(sg['id'], sg['tenant_id']) for sg in secgroups if sg['tenant_id'] == project.id]
> >>> project_sg_ids
> []
>
> Shouldn't the "default" security group be listed there?
>
> In more details, this is the code I'm using (which, again, works as
> expected if I use any security group name other than "default")::
>
> class Projects(object):
> def __init__(self):
> self.session = get_session()
> self.keystone = keystone_client.Client(session=self.session)
> self.neutron = neutron_client.Client(session=self.session)
> self.nova = nova_client('2', session=self.session)
> # ...
>
> # ...
>
> def create(self, form):
> domain = self.keystone.domains.get(
> config.os_project_domain_id)
> project = self.keystone.projects.create(
> form.name.data,
> domain,
> description=form.description.data,
> enabled=False,  # will enable after configuring it
> # ...
> )
> try:
> response = self.neutron.create_security_group({
> 'security_group': {
> 'tenant_id': project.id,
> 'name': 'default',  # works if I change to e.g. 'TEST'
> 'description': "Default security group",
> }
> })
> except Conflict:
> # security group already exists, fetch it
> # `find_security_group_by_name()` is a small filter
> # for `list_security_groups()` results
> default_sg = find_security_group_by_name(self.neutron, project.id, 'default')
> # ... do something with the sec group ...
>
> What am I doing wrong?
>
> Thanks,
> Riccardo
>
> --
> Riccardo Murri
> http://www.s3it.uzh.ch/about/team/#Riccardo.Murri
>
> S3IT: Services and Support for Science IT
> University of Zurich
> Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)
>
> Tel: +41 44 635 4208
> Fax: +41 44 635 6888
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-23 Thread Michael Bayer
On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes wrote:
> Glad you brought this up, Mike. I was going to start a thread about this.
> Comments inline.
>
> On 07/23/2017 05:02 PM, Michael Bayer wrote:
> Well, besides that point (which I agree with), that is attempting to change
> an existing database schema migration, which is a no-no in my book ;)


OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).

I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*.  That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target databse, that should never change.   No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend which is necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.

So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accommodate the case where the
"ndb" flag is present - this is because these migrations have to run on
brand new ndb installations in order to create the database.   If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and it is also fine that
the migrations are used to build up the DB).
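
A hedged sketch of the kind of conditional being described, in the style of a
sqlalchemy-migrate script; the ndb_enabled() helper is a placeholder for
however oslo.db exposes the flag, not its actual API::

    # Illustrative only: the Python that builds the DDL branches on the ndb
    # flag, while the SQL emitted on existing backends stays exactly the same.
    from sqlalchemy import Column, Integer, MetaData, String, Table, Text

    def ndb_enabled(engine):
        """Placeholder for however oslo.db exposes the ndb flag."""
        return getattr(engine, 'ndb_enabled', False)

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)

        if ndb_enabled(migrate_engine):
            # NDB limits row size, so an oversized VARCHAR becomes TEXT here.
            extra_col = Column('extra', Text)
        else:
            # Unchanged path: identical to what this migration always emitted.
            extra_col = Column('extra', String(65535))

        Table('instance_metadata', meta,
              Column('id', Integer, primary_key=True),
              Column('key', String(255)),
              extra_col).create()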

There is also the option to override the compilation for the base
SQLAlchemy String type so that no change at all would be needed
in consuming projects in this area, but it seems like there is a need
to specify ndb-specific length arguments in some cases, so keeping the
oslo_db-level API seems like it would be best.  (Note that the ndb
module in oslo_db *does* instrument the CreateTable construct globally,
though it is very careful not to be involved unless the ndb
flag is present.)
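
For completeness, a sketch of that compilation-override alternative using
SQLAlchemy's @compiles hook; again, ndb_enabled() stands in for whatever flag
check oslo.db would actually use::

    # Sketch: override how the stock String type renders on MySQL when the ndb
    # flag is set, so consuming projects import nothing ndb-specific.
    from sqlalchemy import String
    from sqlalchemy.ext.compiler import compiles

    def ndb_enabled(dialect):
        """Placeholder for however oslo.db exposes the ndb flag."""
        return getattr(dialect, 'ndb_enabled', False)

    @compiles(String, 'mysql')
    def _render_string_for_ndb(type_, compiler, **kw):
        if ndb_enabled(compiler.dialect) and (type_.length or 0) > 255:
            # NDB rows are size-limited; spill long VARCHARs over to TEXT.
            return 'TEXT'
        # Otherwise fall back to the normal MySQL rendering.
        return compiler.visit_string(type_, **kw)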




>
>> I can add these names up to oslo.db and then we would just need to
>> spread these out through all the open ndb reviews and then also patch
>> up Cinder which seems to be the only ndb implementation that's been
>> merged so far.
>
>
> +1
>
>> Keep in mind this is really me trying to correct my own mistake, as I
>> helped design and approved of the original approach here where
>> projects would be consuming against the "ndb." namespace.  However,
>> after seeing it in reviews how prevalent the use of this extremely
>> backend-specific name is, I think the use of the name should be much
>> less frequent throughout projects and only surrounding logic that is
>> purely to do with the ndb backend and no others.   At the datatype
>> level, the chance of future naming conflicts is very high and we
>> should fix this mistake (my mistake) before it gets committed
>> throughout many downstream projects.
>
>
> I had a private conversation with Octave on Friday. I had mentioned that I
> was upset I didn't know about the series of patches to oslo.db that added
> that module. I would certainly have argued against that approach. Please
> consider hitting me with a cluestick next time something of this nature pops
> up. :)
>
> Also, as I told Octave, I have no problem whatsoever with NDB Cluster. I
> actually think it's a pretty brilliant piece of engineering -- and have for
> over a decade since I worked at MySQL.
>
> My complaint regarding the code patch proposed to Nova was around the
> hard-coding of the ndb namespace into the model definitions.
>
> Best,
> -jay
>
>>
>> [1] https://review.openstack.org/#/c/427970/
>>
>> [2] https://review.openstack.org/#/c/446643/
>>
>> [3] https://review.openstack.org/#/c/446136/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] How do you manage your openstack utilisation?

2017-07-23 Thread Manuel Sopena Ballesteros
Dear Openstack community,

We are a medical research institute and we have been running HPC for many
years. We started playing with OpenStack a few months ago and we like its
flexibility for deploying multiple environments. However, we are quite concerned
about resource utilization: in HPC, resources are released back to the rest of
the community once a job has finished, whereas a VM keeps its resources for the
owner of the VM until the instance is killed.

I would like to ask: how do you organize the resources used by OpenStack to
maximize utilization across the organization?

Thank you very much

Manuel Sopena Ballesteros | Big data Engineer
Garvan Institute of Medical Research
The Kinghorn Cancer Centre, 370 Victoria Street, Darlinghurst, NSW 2010
T: + 61 (0)2 9355 5760 | F: +61 (0)2 9295 8507 | E: 
manuel...@garvan.org.au

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [neutron] out-of-tree l3 service providers

2017-07-23 Thread Kevin Benton
If I understand the main issue with using regular callbacks, it's mainly
just that the flavor assignment itself is in a callback, right?

If so, couldn't we solve the problem by just moving flavor assignment to an
explicit call before emitting the callbacks? Or would that still result in
other ordering issues?
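
Purely to illustrate the ordering being suggested, a rough sketch using the
neutron-lib callback registry; the plugin method and the flavor-binding
helpers are placeholders, not actual Neutron code::

    # Resolve the flavor/provider explicitly, then emit the callback, so
    # subscribers see a router that already has its provider bound.
    from neutron_lib.callbacks import events, registry, resources

    class L3PluginSketch(object):
        def create_router(self, context, router):
            router_db = self._create_router_db(context, router)
            # Hypothetical helpers: bind the flavor/provider first...
            driver = self._get_provider_for_flavor(context, router_db)
            self._bind_router_to_driver(context, router_db, driver)
            # ...and only then notify subscribers, including out-of-tree drivers.
            registry.notify(resources.ROUTER, events.AFTER_CREATE, self,
                            context=context, router=router_db)
            return router_db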

On Thu, Jul 13, 2017 at 3:01 AM, Takashi Yamamoto wrote:

> hi,
>
> today i managed to play with l3 flavors.
> i wrote a crude patch to implement midonet flavor. [1]
>
> [1] https://review.openstack.org/#/c/483174/
>
> a good news is it's somehow working.
>
> a bad news is it has a lot of issues, as you can see in TODO comments
> in the patch.
> given these issues, now i tend to think it's cleaner to introduce
> ml2-like precommit/postcommit driver api (or its equivalent via
> callbacks) rather than using these existing notifications.
>
> how do you think?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-23 Thread Jay Pipes
Glad you brought this up, Mike. I was going to start a thread about 
this. Comments inline.


On 07/23/2017 05:02 PM, Michael Bayer wrote:

I've been working with Octave Oregon in assisting with new rules and
datatypes that would allow projects to support the NDB storage engine
with MySQL.

To that end, we've made changes to oslo.db in [1] to support this, and
there are now a bunch of proposals such as [2] [3] to implement new
ndb-specific structures in projects.

The reviews for all downstream projects except Cinder are still under
review. While we have a chance to avoid a future naming problem, I am
making the following proposal:

Rather than having all the projects make use of
oslo_db.sqlalchemy.ndb.AutoStringTinyText / AutoStringSize, we add new
generic types to oslo.db :

oslo_db.sqlalchemy.types.SmallString
oslo_db.sqlalchemy.types.String


This is precisely what I was going to suggest because I was not going to 
go along with the whole injection of NDB-name-specific column types in 
Nova. :)



(or similar )

Internally, the ndb module would be mapping its implementation for
AutoStringTinyText and AutoStringSize to these types.   Functionality
would be identical, just the naming convention exported to downstream
consuming projects would no longer refer to "ndb." for
datatypes.

Reasons for doing so include:

1. openstack projects should be relying upon oslo.db to make the best
decisions for any given database backend, hardcoding as few
database-specific details as possible.   While it's unavoidable that
migration files will have some "if ndb:" kinds of blocks, for the
datatypes themselves, the "ndb." namespace defeats extensibility.


Right, my thoughts exactly.

if IBM wanted Openstack to run on DB2 (again?) and wanted to add a 
"db2.String" implementation to oslo.db for example, the naming and 
datatypes would need to be opened up as above in any case;  might as 
well make the change now before the patch sets are merged.


Yep.


2. The names "AutoStringTinyText" and "AutoStringSize" themselves are
confusing and inconsistent w/ each other (e.g. what is "auto"?  one is
"auto" if its String or TinyText and the other is "auto" if its
String, and..."size"?)


Yes. Oh God yes. The MySQL TINY/MEDIUM/BIG [INT|TEXT] data types were 
always entirely irrational and confusing. No need to perpetuate that 
terminology.



3. it's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
For example in 
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
I see a list of String(255)'s changed to one type or the other without
any clear notion why one would use one or the other.  Having names
that define simply the declared nature of the type would be most
appropriate.


Well, besides that point (which I agree with), that is attempting to 
change an existing database schema migration, which is a no-no in my book ;)



I can add these names up to oslo.db and then we would just need to
spread these out through all the open ndb reviews and then also patch
up Cinder which seems to be the only ndb implementation that's been
merged so far.


+1


Keep in mind this is really me trying to correct my own mistake, as I
helped design and approved of the original approach here where
projects would be consuming against the "ndb." namespace.  However,
after seeing it in reviews how prevalent the use of this extremely
backend-specific name is, I think the use of the name should be much
less frequent throughout projects and only surrounding logic that is
purely to do with the ndb backend and no others.   At the datatype
level, the chance of future naming conflicts is very high and we
should fix this mistake (my mistake) before it gets committed
throughout many downstream projects.


I had a private conversation with Octave on Friday. I had mentioned that 
I was upset I didn't know about the series of patches to oslo.db that 
added that module. I would certainly have argued against that approach. 
Please consider hitting me with a cluestick next time something of this 
nature pops up. :)


Also, as I told Octave, I have no problem whatsoever with NDB Cluster. I 
actually think it's a pretty brilliant piece of engineering -- and have 
for over a decade since I worked at MySQL.


My complaint regarding the code patch proposed to Nova was around the 
hard-coding of the ndb namespace into the model definitions.


Best,
-jay



[1] https://review.openstack.org/#/c/427970/

[2] https://review.openstack.org/#/c/446643/

[3] https://review.openstack.org/#/c/446136/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-23 Thread Kevin Benton
Yeah, the networking guide does include configuration for some of the
sub-projects (e.g. BGP is at [1]). For the remaining ones there is work
that needs to be done because their docs live in wiki pages.

1.
https://docs.openstack.org/ocata/networking-guide/config-bgp-dynamic-routing.html


On Sun, Jul 23, 2017 at 1:32 PM, Doug Hellmann 
wrote:

> Excerpts from Kevin Benton's message of 2017-07-23 01:31:25 -0700:
> > Yeah, I was just thinking it makes it more explicit that we haven't just
> > skipped doing an admin guide for a particular project.
>
> Sure, you can do that. I don't think we want to link to all of those
> pages from the list of admin guides, though.
>
> I've updated the burndown chart generator to ignore the missing
> admin guide URLs for networking subprojects.
>
> I don't see configuration or installation guides for quite a few
> of those, either. Are those also handled within the neutron main
> tree docs?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-23 Thread Michael Bayer
I've been working with Octave Oregon in assisting with new rules and
datatypes that would allow projects to support the NDB storage engine
with MySQL.

To that end, we've made changes to oslo.db in [1] to support this, and
there are now a bunch of proposals such as [2] [3] to implement new
ndb-specific structures in projects.

The reviews for all downstream projects except Cinder are still under
review. While we have a chance to avoid a future naming problem, I am
making the following proposal:

Rather than having all the projects make use of
oslo_db.sqlalchemy.ndb.AutoStringTinyText / AutoStringSize, we add new
generic types to oslo.db :

oslo_db.sqlalchemy.types.SmallString
oslo_db.sqlalchemy.types.String

(or similar )

Internally, the ndb module would be mapping its implementation for
AutoStringTinyText and AutoStringSize to these types.   Functionality
would be identical, just the naming convention exported to downstream
consuming projects would no longer refer to "ndb." for
datatypes.
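
To make that concrete, roughly what the change would look like from a
consuming model's point of view; the generic type names are the suggestion in
this email, not a merged oslo.db API, and the ndb call in the comment is
likewise only illustrative::

    from sqlalchemy import Column, Integer
    from sqlalchemy.ext.declarative import declarative_base

    from oslo_db.sqlalchemy import types as oslo_types  # proposed location

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        # Before (backend-specific, what the open reviews currently do):
        #   hostname = Column(ndb.AutoStringSize(255))
        # After (generic; oslo.db decides what NDB or any other backend gets):
        hostname = Column(oslo_types.String(255))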

Reasons for doing so include:

1. openstack projects should be relying upon oslo.db to make the best
decisions for any given database backend, hardcoding as few
database-specific details as possible.   While it's unavoidable that
migration files will have some "if ndb:" kinds of blocks, for the
datatypes themselves, the "ndb." namespace defeats extensibility.  if
IBM wanted Openstack to run on DB2 (again?) and wanted to add a
"db2.String" implementation to oslo.db for example, the naming and
datatypes would need to be opened up as above in any case;  might as
well make the change now before the patch sets are merged.

2. The names "AutoStringTinyText" and "AutoStringSize" themselves are
confusing and inconsistent w/ each other (e.g. what is "auto"?  one is
"auto" if its String or TinyText and the other is "auto" if its
String, and..."size"?)

3. it's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
For example in 
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
I see a list of String(255)'s changed to one type or the other without
any clear notion why one would use one or the other.  Having names
that define simply the declared nature of the type would be most
appropriate.

I can add these names up to oslo.db and then we would just need to
spread these out through all the open ndb reviews and then also patch
up Cinder which seems to be the only ndb implementation that's been
merged so far.

Keep in mind this is really me trying to correct my own mistake, as I
helped design and approved of the original approach here where
projects would be consuming against the "ndb." namespace.  However,
after seeing it in reviews how prevalent the use of this extremely
backend-specific name is, I think the use of the name should be much
less frequent throughout projects and only surrounding logic that is
purely to do with the ndb backend and no others.   At the datatype
level, the chance of future naming conflicts is very high and we
should fix this mistake (my mistake) before it gets committed
throughout many downstream projects.


[1] https://review.openstack.org/#/c/427970/

[2] https://review.openstack.org/#/c/446643/

[3] https://review.openstack.org/#/c/446136/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-23 Thread Anne Gentle
On Sat, Jul 22, 2017 at 12:13 PM, Akihiro Motoki wrote:
> Hi,
>
> I have a question on the admin/ document related to the networking guide
> and would like to have advice from the documentation experts.
>
> It seems the check site by Doug expects all projects to have an admin/ page.
> In the case of neutron the situation is a bit special. We have the
> networking guide as the admin/ document
> in the neutron repo and it covers not only neutron itself but also
> neutron stadium projects.
> It means the neutron stadium projects sometimes (often?) have no
> admin/ directory in their own repos
> in favor of adding contents to the networking guide in neutron.
>
> Should Individual neutron stadium projects have their own admin guide
> in their repositories,
> or is it better to keep the networking guide which covers all
> networking stuff in a single guide?

It's better to keep the networking guide as close to what it is now as
possible, based on the web stats and prior input on that guide's
popularity.

Could you list all the neutron stadium projects? That might help
answer the next question, because if they require neutron, it seems
like a single networking guide is a good way forward. If my memory
serves, it was vpnaas, fwaas, and lbaas, and a whole lot more.
Reviewing 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/neutron-stadium.html
I see a lot more with varying amounts of docs. My general guideline at
first glance would be, if they require neutron, let's put them in the
networking guide in the neutron repo.

I may not say the same for storage, because that's a different set of
interdependencies and decisions for operators to make. I don't think
cinder ever had a stadium, for example. But, since the "neutron
stadium" was very contributor-specific, let's make sure we're meeting
consumer needs.

> What is the suggested way on the networking guide as the document expert?

Keep working with us with specific scenarios and details and we can
fill in a set of reader needs, hopefully.

Anne

>
> Thanks,
> Akihiro
>
> 2017-07-22 3:26 GMT+09:00 Doug Hellmann :
>> We've made huge progress, and are launching the updated landing
>> pages for docs.openstack.org as I write this. Thanks to all of the
>> contributors who have stepped up to write nearly 1,000 patches to
>> improve the health of our documentation!
>>
>> We still have around 70 URLs we expected to see after the migration
>> was complete but that produce a 404. I know some of the patches to
>> produce those pages are in progress, but please check the list at
>> https://doughellmann.com/doc-migration/ if your team is listed below
>> to ensure that nothing has been missed.
>>
>>   cinder
>>   cloudkitty
>>   congress
>>   designate
>>   heat
>>   ironic
>>   karbor
>>   keystone
>>   magnum
>>   manila
>>   murano
>>   neutron
>>   nova
>>   sahara
>>   senlin
>>   swift
>>   tacker
>>   telemetry
>>   tricircle
>>   trove
>>   vitrage
>>   watcher
>>   zaqar
>>   zun
>>
>> Reply here or ping me in #openstack-docs if you have questions or need a
>> hand.
>>
>> Doug
>>
>> ___
>> OpenStack-docs mailing list
>> openstack-d...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Read my blog: justwrite.click
Subscribe to Docs|Code: docslikecode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-23 Thread Michael Bayer
Hey list -

It appears that MariaDB as of version 10.2 has made an enhancement
that is overall great and fairly historic in the MySQL community:
they've made CHECK constraints finally work.   For all of MySQL's
existence, you could emit a CREATE TABLE statement that included a CHECK
constraint, but the CHECK phrase would be silently ignored; there are
no actual CHECK constraints in MySQL.

MariaDB 10.2 has now made CHECK do something!  However!  The bad news!
 They have decided that a CHECK constraint against a single column
should not be implicitly dropped if you drop the column [1].   In case
you were under the impression your SQLAlchemy / oslo.db project
doesn't use CHECK constraints: if you are using the SQLAlchemy Boolean
type, or the "ENUM" type without using MySQL's native ENUM feature
(less likely), there's a simple CHECK constraint in there.
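
To see the constraint in question, a small illustration of the DDL SQLAlchemy
emits for a Boolean column on MySQL/MariaDB with the releases in use at the
time (where create_constraint defaulted to True); the table and column names
are made up::

    from sqlalchemy import Boolean, Column, Integer, MetaData, Table
    from sqlalchemy.dialects import mysql
    from sqlalchemy.schema import CreateTable

    t = Table('example', MetaData(),
              Column('id', Integer, primary_key=True),
              Column('enabled', Boolean(name='ck_example_enabled')))

    print(CreateTable(t).compile(dialect=mysql.dialect()))
    # Output (roughly):
    #   CREATE TABLE example (
    #       id INTEGER NOT NULL AUTO_INCREMENT,
    #       enabled BOOL,
    #       PRIMARY KEY (id),
    #       CONSTRAINT ck_example_enabled CHECK (enabled IN (0, 1))
    #   )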

So far the Zun project has reported the first bug on Alembic [2]: they
can't emit a DROP COLUMN for a boolean column.    In [1] I've
made my complete argument for why this decision on the MariaDB side is
misguided.   However, be on the lookout for boolean columns that can't
be dropped in some environments using newer MariaDB.  Workarounds for
now include:

1. when using Boolean(), set create_constraint=False

2. when using Boolean(), make sure it has a "name" to give the
constraint, so that later you can DROP CONSTRAINT easily

3. if not doing #1 and #2, in order to drop the column you need to use
the inspector (e.g. from sqlalchemy import inspect; inspector =
inspect(engine)) and locate all the CHECK constraints involving the
target column, and then drop them by name.
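
A short sketch of workarounds #1 and #2 above, model side plus the later
Alembic drop; the table and constraint names are illustrative::

    from alembic import op
    from sqlalchemy import Boolean, Column, Integer
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Service(Base):
        __tablename__ = 'services'
        id = Column(Integer, primary_key=True)
        # Workaround 1: no CHECK constraint is created for this column at all.
        disabled = Column(Boolean(create_constraint=False))
        # Workaround 2: the constraint gets a known, predictable name.
        forced_down = Column(Boolean(name='ck_services_forced_down'))

    # A later Alembic migration can then drop the named constraint explicitly
    # before dropping the column:
    def upgrade():
        op.drop_constraint('ck_services_forced_down', 'services', type_='check')
        op.drop_column('services', 'forced_down')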

[1] https://jira.mariadb.org/browse/MDEV-4

[2] 
https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-23 Thread Doug Hellmann
Excerpts from Kevin Benton's message of 2017-07-23 01:31:25 -0700:
> Yeah, I was just thinking it makes it more explicit that we haven't just
> skipped doing an admin guide for a particular project.

Sure, you can do that. I don't think we want to link to all of those
pages from the list of admin guides, though.

I've updated the burndown chart generator to ignore the missing
admin guide URLs for networking subprojects.

I don't see configuration or installation guides for quite a few
of those, either. Are those also handled within the neutron main
tree docs?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [group-based-policy] How to get Neutron ports with fixed IP when creating policy targets with Heat?

2017-07-23 Thread Lukas Garberg
Hi,

I have now confirmed that the package we have deployed does NOT contain the 
commit you referred to. Thanks for pointing it out!

Regards,
Lukas

On July 12, 2017 9:57:46 AM GMT+02:00, Sumit Naiksatam wrote:
>Hi Lukas,
>
>Could you please confirm if you have the following commit in the
>package
>you have deployed:
>https://github.com/openstack/group-based-policy-automation/commit/ea1fb1725062e97ea2fa8d6af188b718876d9f89
>
>The above was a fix to the issue you are seeing.
>
>Thanks,
>Sumit.
>
>On Jul 11, 2017 3:28 PM, "Lukas Garberg"  wrote:
>
>> Hi all,
>>
>> I'm trying to create a heat template automating the creation of
>> group-based policy resources when deploying stacks. The template takes an
>> L3 policy as an input argument and then creates an L2 policy, a policy
>> target group and a policy target. I use GBP together with Cisco APIC on
>> OpenStack Mitaka.
>>
>> (Slightly simplified) Heat template:
>>
>>   parameters:
>>     l3p_main:
>>       type: string
>>       description: L3 policy name to use for main network interface
>>
>>   resources:
>>     l2p_main:
>>       type: OS::GroupBasedPolicy::L2Policy
>>       properties:
>>         name: { list_join: [ '_', [ { get_param: 'OS::stack_name' }, 'l2p' ] ] }
>>         l3_policy_id: { get_param: l3p_main }
>>         shared: false
>>
>>     ptg_main:
>>       type: OS::GroupBasedPolicy::PolicyTargetGroup
>>       properties:
>>         name: { list_join: [ '_', [ { get_param: 'OS::stack_name' }, 'ptg' ] ] }
>>         l2_policy_id: { get_resource: l2p_main }
>>         shared: false
>>
>>     pt_main:
>>       type: OS::GroupBasedPolicy::PolicyTarget
>>       properties:
>>         name: { list_join: [ '_', [ { get_param: 'OS::stack_name' }, 'pt' ] ] }
>>         policy_target_group_id: { get_resource: ptg_main }
>>
>>     server:
>>       type: OS::Nova::Server
>>       properties:
>>         networks:
>>           - port: { get_attr: [ pt_main, port_id ] }
>>
>> The stack create fails with the following error message (taken from the
>> openstack stack show ... CLI command):
>>   | stack_status        | CREATE_FAILED |
>>   | stack_status_reason | Resource CREATE failed: BadRequest: resources.server:
>>   |                     | Port 49638f39-3e13-4813-b69f-efa2b3001c11 requires a
>>   |                     | FixedIP in order to be used. (HTTP 400)
>>   |                     | (Request-ID: req-4b6c465b-bb54-4eef-ae0b-d17e4a626c66) |
>>
>> Inspecting the neutron port referred to by the policy target which
>was
>> created gives the following:
>>   $ neutron port-show 49638f39-3e13-4813-b69f-efa2b3001c11
>>   +-----------------------+--------------------------------------+
>>   | Field                 | Value                                |
>>   +-----------------------+--------------------------------------+
>>   | admin_state_up        | True                                 |
>>   | allowed_address_pairs |                                      |
>>   | binding:vnic_type     | normal                               |
>>   | created_at            | 2017-07-11T21:11:54                  |
>>   | description           |                                      |
>>   | device_id             |                                      |
>>   | device_owner          |                                      |
>>   | extra_dhcp_opts       |                                      |
>>   | fixed_ips             |                                      |  <-- empty
>>   | id                    | 49638f39-3e13-4813-b69f-efa2b3001c11 |
>>   | mac_address           | fa:16:3e:93:b2:25                    |
>>   | name                  | pt_foo_bar_test_pt                   |
>>   | network_id            | 72455662-1210-4aac-af70-8b19a974e0ea |
>>   | security_groups       | a3dd6bdc-bf85-4340-b305-166defc8e41c |
>>   | status                | DOWN                                 |
>>   | tenant_id             | c0351d9a317f4b16b79ba7fa1fec4e0b     |
>>   | updated_at            | 2017-07-11T21:11:54                  |
>>   +-----------------------+--------------------------------------+
>>
>> If I instead create a policy target manually with the GBP CLI client like
>> this:
>>   gbp pt-create --policy-target-group hello_ptg hello_test_pt
>>
>> The generated port looks like this:
>>   $ openstack port show 74ea24e4-8925-4173-ba13-6b0fd319c18e
>>   +-----------------------+---------------------+
>>   | Field                 | Value               |
>>   +-----------------------+---------------------+
>>   | admin_state_up        | UP                  |
>>   | allowed_address_pairs |                     |
>>   | binding_vnic_type     | normal              |
>>   | created_at            | 2017-06-27T12:57:01 |
>>   | description           | None                |
>>   | device_id

[openstack-dev] [RDO] Major maintenance on review.rdoproject.org @ 2017-07-25 at 23:00 UTC

2017-07-23 Thread David Moreau Simard
Hi,

(+openstack-dev for wider audience)

Please note that we are planning a 2-hour maintenance window during which
review.rdoproject.org may be unavailable next Tuesday evening,
July 25th, at 23:00 UTC.
We will proceed with an upgrade of the instance software and will
migrate the review.rdoproject.org infrastructure to a new cloud
environment.

Please note that while this maintenance will have no impact on the
availability of RDO mirrors, new package builds or tags could be
delayed until the maintenance is finished.

If you have any questions, please do not hesitate to reply here.

Thanks !

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Configuring DVR.

2017-07-23 Thread Satish Patel
Is DVR good to go for production? I heard it's complex to troubleshoot, and
another downside is that you need a public IP on every compute node for the
internet gateway. What if you have 100 compute nodes?

Sent from my iPhone

> On Feb 20, 2017, at 12:33 AM, Ignazio Cassano  
> wrote:
> 
> Hello , you should read examples in networking guide:
> https://docs.openstack.org/newton/networking-guide/deploy-ovs-ha-dvr.html 
> after reading installation guide.
> Ignazio
> 
> 
> On Feb 20, 2017, at 04:20, "Ken D'Ambrosio" wrote:
>> Hey, all.  Launching my first Newton cloud, and we've decided to go with 
>> DVR.  I can't seem to find a "what changes there are, what's involved, and 
>> how to configure it" sort of informative-like page.  Hopefully, this just 
>> means I'm googling poorly.  Can someone point me in the right direction?
>> 
>> Thanks!
>> 
>> -Ken
>> 
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Production openstack on 5 node

2017-07-23 Thread Satish Patel
I found an article on the RDO website and it's very easy to set up, but I don't
know about troubleshooting and complexity in production when you have many nodes.

Sent from my iPhone

> On Jun 29, 2017, at 10:22 AM, Remo Mattei  wrote:
> 
> Can you share your info? I was looking at that!
> 
> Thanks
> 
> Sent from my iPhone
> 
>> On Jun 29, 2017, at 6:23 AM, Satish Patel wrote:
>> 
>> I have implemented DVR in a test environment, but I am not sure how popular it
>> is in production in terms of complexity and management.
>> 
>> Sent from my iPhone
>> 
>>> On Jun 25, 2017, at 4:30 PM, Remo Mattei  wrote:
>>> 
>>> DVR is a good option if you want to implement a way to have your VMs always
>>> available. It needs OVS.
>>> 
>>> I still have not seen a doc detailing the implementation yet.
>>> 
>>> :) 
>>> 
>>> Sent from my iPhone
>>> 
 On Jun 25, 2017, at 12:38 PM, Satish Patel wrote:
 
 Thank you folks!
 
 I have 5 nodes, so I am planning to implement the roles below. I am a
 little confused about the network component: should I use DVR
 (distributed virtual router)? The problem is that every single node needs a
 public IP address in that case. This is my first time building a
 production-style OpenStack, so I don't know what the best option would be.
 
 1 controller + network
 4 compute nodes
 
> On Wed, Jun 21, 2017 at 10:22 PM, Mike Smith  
> wrote:
> Agree you should definitely check out what is already out there in 
> community
> if you are starting from scratch.  In our specific case, we do our puppet
> configs manually because 1) we are still on puppet v3 for everything else
> and 2) we have been doing it this way since Folsom and it’s worked well.
> 
> We are about to build another new cluster on Ocata and I think for that 
> one
> I will try and leverage more of the community puppet stuff if possible.
> 
> On Jun 21, 2017, at 1:49 PM, John van Ommen  
> wrote:
> 
> I couldn't agree more. OpenStack deployment is not a trivial task. Do not
> reinvent the wheel.
> 
> John
> 
> On Jun 21, 2017 12:04 PM, "Erik McCormick" 
> wrote:
>> 
>> If you are a Puppet shop you should check out the Puppet community
>> modules. You will get familiar with all the inner goo as you'll need your
>> own composition layer (in my case a series of hiera files). There's no
>> reason to reinvent the wheel unless you really want to.
>> 
>> -Erik
>> 
>>> On Jun 21, 2017 2:19 PM, "Satish Patel"  wrote:
>>> 
>>> Thank you all of you for your opinions. As Mike suggested, I am also
>>> planning to brew a home-grown Puppet module to understand each and every
>>> component and its role instead of grabbing a third-party tool. I am
>>> sure TripleO is best for deploying 100 compute nodes, but in my setup we
>>> have only 5 servers and it's not worth it to manage an undercloud server.
>>> 
>>> We are a Puppet shop so it would be easy to write our own code and go from
>>> there.
>>> 
 On Wed, Jun 21, 2017 at 1:25 AM, Remo Mattei  wrote:
 I did a deployment with CS9 HP and it was pretty bad. I hope the new one
 does better.
 
 Nevertheless, I do not see many using HP out there. Maybe different
 regions like EMEA do better with that.
 
 Sent from my iPhone
 
 On Jun 20, 2017, at 9:10 PM, John van Ommen wrote:
 
 At HPE we originally used TripleO but switched to a 'flat' model.
 
 I personally didn't see any advantage to TripleO. In theory, it should be
 easier to manage and upgrade. In the real world, Helion 3.0 and 4.0 are
 superior in every respect.
 
 John
 
> On Jun 20, 2017 9:02 PM, "Remo Mattei"  wrote:
> 
> I worked for Red Hat and they really want to get TripleO going because the
> installation tools never worked as everyone was hoping. Before Red Hat I
> was at Mirantis, and the Fuel installer was nice but is now dead. I know
> TripleO will move into containers in the next couple of releases, but
> Kolla-Ansible is one of the emerging solutions now to get it out fast.
> 
> I am doing a project now where I am working on deploying TripleO; I just
> finished the doc for the Ocata undercloud.
> 
> Just my two cents to concur with Mike’s statement.
> 
> Remo
> 
> Sent from my iPhone
> 
> On Jun 20, 2017, at 8:51 PM, Mike Smith
> 

[openstack-dev] [Oslo] Weekly meeting canceled on July 24

2017-07-23 Thread ChangBo Guo
I'm not able to chair the meeting tomorrow because I will be attending the
OpenStack Days China event, and we just issued the final releases of the Oslo
libraries for Pike last week. It seems there is no urgent stuff to handle, so
let's skip the meeting.


-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-23 Thread Kevin Benton
Yeah, I was just thinking it makes it more explicit that we haven't just
skipped doing an admin guide for a particular project.

On Sun, Jul 23, 2017 at 1:18 AM, Andreas Jaeger wrote:

> On 07/22/2017 11:44 PM, Kevin Benton wrote:
>
>> Could we just put a placeholder in those subprojects /admin directories
>> that redirects to the networking guide?
>>
>
> You mean a single page that basically says :
>
> All information is covered in the `Networking Guide
> <https://docs.openstack.org/neutron/latest/admin/networking-guide>`_ .
>
> Yes, this could be done - do we really need this?
>
> Akihiro, having an admin/ directory is not a must-have,
>
> Andreas
>
> On Sat, Jul 22, 2017 at 10:50 AM, Doug Hellmann wrote:
>>
>> Excerpts from Akihiro Motoki's message of 2017-07-23 02:13:40 +0900:
>> > Hi,
>> >
>> > I have a question on admin/ document related to the networking guide
>> > and would like to have advices from the documentation experts.
>> >
>> > It seems the check site by Doug expect all project have admin/ page.
>> > In the case of neutron the situation is a bit special. We have the
>> > networking guide as admin/ document
>> > in the neutron repo and it covers not only neutron itself but also
>> > neutron stadium projects.
>> > It means the neutron stadium projects sometimes (often?) have no
>> > admin/ directory in their own repos
>> > in favor of adding contents to the networking guide in neutron.
>> >
>> > Should Individual neutron stadium projects have their own admin
>> guide
>> > in their repositories,
>> > or is it better to keep the networking guide which covers all
>> > networking stuff in a single guide?
>> >
>> > What is the suggested way on the networking guide as the document
>> expert?
>>
>> If the admin guides for all of those repos are combined, then I can
>> modify the burndown chart generator to not count those as needed. Let
>> me
>> know if that's the best approach.
>>
>> Doug
>>
>>  >
>>  > Thanks,
>>  > Akihiro
>>  >
>>  > 2017-07-22 3:26 GMT+09:00 Doug Hellmann > >:
>>  > > We've made huge progress, and are launching the updated landing
>>  > > pages for docs.openstack.org  as I
>>
>> write this. Thanks to all of the
>>  > > contributors who have stepped up to write nearly 1,000 patches to
>>  > > improve the health of our documentation!
>>  > >
>>  > > We still have around 70 URLs we expected to see after the
>> migration
>>  > > was complete but that produce a 404. I know some of the patches
>> to
>>  > > produce those pages are in progress, but please check the list at
>>  > > https://doughellmann.com/doc-migration/ if your team is listed below
>>  > > to ensure that nothing has been missed.
>>  > >
>>  > >   cinder
>>  > >   cloudkitty
>>  > >   congress
>>  > >   designate
>>  > >   heat
>>  > >   ironic
>>  > >   karbor
>>  > >   keystone
>>  > >   magnum
>>  > >   manila
>>  > >   murano
>>  > >   neutron
>>  > >   nova
>>  > >   sahara
>>  > >   senlin
>>  > >   swift
>>  > >   tacker
>>  > >   telemetry
>>  > >   tricircle
>>  > >   trove
>>  > >   vitrage
>>  > >   watcher
>>  > >   zaqar
>>  > >   zun
>>  > >
>>  > > Reply here or ping me in #openstack-docs if you have questions or need a
>>  > > hand.
>>  > >
>>  > > Doug
>>  > >
>>  > > ___
>>  > > OpenStack-docs mailing list
>>  > > openstack-d...@lists.openstack.org
>>  > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
>>  >
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> --
>  Andreas Jaeger 

Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-23 Thread Andreas Jaeger

On 07/22/2017 11:44 PM, Kevin Benton wrote:
Could we just put a placeholder in those subprojects /admin directories 
that redirects to the networking guide?


You mean a single page that basically says :

All information is covered in the `Networking Guide
<https://docs.openstack.org/neutron/latest/admin/networking-guide>`_ .

Yes, this could be done - do we really need this?

Akihiro, having an admin/ directory is not a must-have,

Andreas

On Sat, Jul 22, 2017 at 10:50 AM, Doug Hellmann wrote:


Excerpts from Akihiro Motoki's message of 2017-07-23 02:13:40 +0900:
> Hi,
>
> I have a question on admin/ document related to the networking guide
> and would like to have advices from the documentation experts.
>
> It seems the check site by Doug expect all project have admin/ page.
> In the case of neutron the situation is a bit special. We have the
> networking guide as admin/ document
> in the neutron repo and it covers not only neutron itself but also
> neutron stadium projects.
> It means the neutron stadium projects sometimes (often?) have no
> admin/ directory in their own repos
> in favor of adding contents to the networking guide in neutron.
>
> Should Individual neutron stadium projects have their own admin guide
> in their repositories,
> or is it better to keep the networking guide which covers all
> networking stuff in a single guide?
>
> What is the suggested way on the networking guide as the document expert?

If the admin guides for all of those repos are combined, then I can
modify the burndown chart generator to not count those as needed. Let me
know if that's the best approach.

Doug

 >
 > Thanks,
 > Akihiro
 >
 > 2017-07-22 3:26 GMT+09:00 Doug Hellmann:
 > > We've made huge progress, and are launching the updated landing
 > > pages for docs.openstack.org as I write this. Thanks to all of the
 > > contributors who have stepped up to write nearly 1,000 patches to
 > > improve the health of our documentation!
 > >
 > > We still have around 70 URLs we expected to see after the migration
 > > was complete but that produce a 404. I know some of the patches to
 > > produce those pages are in progress, but please check the list at
 > > https://doughellmann.com/doc-migration/ if your team is listed below
 > > to ensure that nothing has been missed.
 > >
 > >   cinder
 > >   cloudkitty
 > >   congress
 > >   designate
 > >   heat
 > >   ironic
 > >   karbor
 > >   keystone
 > >   magnum
 > >   manila
 > >   murano
 > >   neutron
 > >   nova
 > >   sahara
 > >   senlin
 > >   swift
 > >   tacker
 > >   telemetry
 > >   tricircle
 > >   trove
 > >   vitrage
 > >   watcher
 > >   zaqar
 > >   zun
 > >
 > > Reply here or ping me in #openstack-docs if you have questions or need a
 > > hand.
 > >
 > > Doug
 > >
 > > ___
 > > OpenStack-docs mailing list
 > > openstack-d...@lists.openstack.org
 > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs
 >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev