[openstack-dev] [Mogan]Tasks update of Mogan Project

2017-07-24 Thread hao wang
Hi,

We are glad to present this week's tasks update of Mogan.

See the details below:

Essential Priorities
====================

1. Track resources using placement service (liusheng, zhenguo)

blueprint: 
https://blueprints.launchpad.net/mogan/+spec/track-resources-using-placement

spec: https://review.openstack.org/#/c/475700/  merged

code:  https://review.openstack.org/#/c/476325/ merged

https://review.openstack.org/#/c/477426/ merged

https://review.openstack.org/#/c/477826/ abandoned

https://review.openstack.org/#/c/478357/ merged

https://review.openstack.org/#/c/478361/ merged

https://review.openstack.org/#/c/478403/ abandoned

https://review.openstack.org/#/c/478405/ abandoned

https://review.openstack.org/#/c/478406/ merged


2. Node aggregates (liudong, zhangyang, zhenguo)

blueprint: https://blueprints.launchpad.net/mogan/+spec/node-aggregate

spec: https://review.openstack.org/#/c/470927/


3. Server groups and scheduler hints (liudong, liusheng)

blueprint: 
https://blueprints.launchpad.net/mogan/+spec/server-group-api-extension
 https://blueprints.launchpad.net/mogan/+spec/support-schedule-hints

spec: None

code: scheduler hints: https://review.openstack.org/#/c/463534/


4. Adopt servers (wanghao, litao)

blueprint: https://blueprints.launchpad.net/mogan/+spec/manage-existing-bms

spec: https://review.openstack.org/#/c/459967/ merged

code:  https://review.openstack.org/#/c/479660/  Part-One
   https://review.openstack.org/#/c/481544/  Part-Two


5. Valence integration (zhenguo, shaohe, luyao, Xinran)  Moved to next cycle.

blueprint: https://blueprints.launchpad.net/mogan/+spec/valence-integration

spec: 
https://review.openstack.org/#/c/441790/3/specs/pike/approved/valence-integration.rst

No updates

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Cleaning up inactive meetbot channels

2017-07-24 Thread Gary Kotton
FYI - https://review.openstack.org/486887
We should be using #openstack-neutron instead. 

On 7/20/17, 8:46 PM, "Graham Hayes"  wrote:

On 19/07/17 20:24, Jeremy Stanley wrote:
> For those who are unaware, Freenode doesn't allow any one user to
> /join more than 120 channels concurrently. This has become a
> challenge for some of the community's IRC bots in the past year,
> most recently the "openstack" meetbot (which not only handles
> meetings but also takes care of channel logging to
> eavesdrop.openstack.org and does the nifty bug number resolution
> some people seem to like).
> 
> I have run some rudimentary analysis and come up with the following
> list of channels which have had fewer than 10 lines said by anyone
> besides a bot over the past three months:
> 
> #craton
> #openstack-api
> #openstack-app-catalog
> #openstack-bareon
> #openstack-cloudpulse
> #openstack-community
> #openstack-cue
> #openstack-diversity
> #openstack-gluon
> #openstack-gslb

This should have been removed a while ago

https://review.openstack.org/#/q/topic:remove-openstack-gslb-irc

> #openstack-ko
> #openstack-kubernetes
> #openstack-networking-cisco
> #openstack-neutron-release
> #openstack-opw
> #openstack-pkg
> #openstack-product
> #openstack-python3
> #openstack-quota
> #openstack-rating
> #openstack-solar
> #openstack-swauth
> #openstack-ux
> #openstack-vmware-nsx
> #openstack-zephyr
> 
> I have a feeling many of these are either no longer needed, or what
> little and infrequent conversation they get used for could just as
> easily happen in a general channel like #openstack-dev or #openstack
> or maybe in the more active channel of their parent team for some
> subteams. Who would miss these if we ceased logging/using them? Does
> anyone want to help by asking around to people who might not see
> this thread, maybe by popping into those channels and seeing if any
> of the sleeping denizens awaken and say they still want to keep it
> around?
> 
> Ultimately we should improve our meetbot deployment to support
> sharding channels across multiple bots, but that will take some time
> to implement and needs volunteers willing to work on it. In the
> meantime we're running with the meetbot present in 120 channels and
> have at least one new channel that desires logging and can't get it
> until we whittle that number down.
> 
> 
> 





[openstack-dev] [tripleo] releasing tripleoclient final (pike)

2017-07-24 Thread Emilien Macchi
Hi!

This week is the pike-3 milestone and also the final release for clients,
which means we're going to cut a stable/pike branch for
python-tripleoclient.
It will be done by https://review.openstack.org/#/c/486657/ before
Wednesday EOD; please ping me if there are any blockers.

Once this is released, we'll accept backports into the stable branch
per the stable branch policy:
https://docs.openstack.org/project-team-guide/stable-branches.html
As usual, we won't backport backward-incompatible changes or
anything that would change user-visible behavior.
If there is something critical we have missed during this cycle that
absolutely needs to be in Pike, please reply to this thread and
we'll see what we can do.

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-24 Thread Michał Jastrzębski
>>...
>>DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
>>DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-
>> keystone:latest
>>...

That's a strange construction; are you sure you don't want to
separate address:port from the namespace (tripleoupstream here)?

Say you'd like to set up Docker to point at an insecure registry (adding
--insecure-registry to the systemd conf): that option takes addr:port,
not the whole string.
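To illustrate the point with a hypothetical helper (not TripleO code): a combined "addr:port/namespace" value has to be split before the registry host can be handed to --insecure-registry.

```python
def split_registry(image_ref):
    """Split a combined 'addr:port/namespace/...' reference into the
    registry host (what --insecure-registry expects) and the rest."""
    registry, _, remainder = image_ref.partition("/")
    return registry, remainder

host, rest = split_registry(
    "172.19.0.2:8787/tripleoupstream/centos-binary-keystone:latest")
# host is "172.19.0.2:8787"; rest is the namespace/image portion
```

Keeping the two pieces as separate parameters would avoid having to re-split them like this at deploy time.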



Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 5:10 PM, Octave J. Orgeron
 wrote:
> I don't think it makes sense to make these global. We don't need to change
> all occurrences of String(255) to TinyText for example. We make that
> determination through understanding the table structure and usage. But I do
> like the idea of changing the second option to ndb_size=, I think that makes
> things very clear. If you want to collapse the use cases.. what about?:
>
> oslo_db.sqlalchemy.String(255, ndb_type=TINYTEXT) -> VARCHAR(255) for most
> dbs, TINYTEXT for ndb
> oslo_db.sqlalchemy.String(4096, ndb_type=TEXT) -> VARCHAR(4096) for most
> dbs, TEXT for ndb
> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on most dbs,
> VARCHAR(64) on ndb
>
> This way, we can override the String with TINYTEXT or TEXT or change the
> size for ndb.

OK.   See, when I was originally pushing for an ndb "dialect", that
hook would let us say String(255).with_variant(TEXT, "ndb"), which is
what I was going for.  However, since we went with a special flag
and not a dialect, using ndb_type / ndb_size is *probably* fine.
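A minimal sketch of how the proposed ndb_type / ndb_size keywords could sit on top of SQLAlchemy's with_variant() hook. This is illustrative only: the "ndb" dialect name and the wrapper itself are assumptions, not the actual oslo.db implementation.

```python
from sqlalchemy import String as SAString
from sqlalchemy.dialects.mysql import TINYTEXT


def String(length, ndb_type=None, ndb_size=None):
    """VARCHAR(length) on most backends, with an ndb-specific variant.

    ndb_type is a type class (e.g. TINYTEXT or TEXT) used under ndb;
    ndb_size is a smaller VARCHAR length used under ndb.
    """
    base = SAString(length)
    if ndb_type is not None:
        # e.g. String(255, ndb_type=TINYTEXT) -> TINYTEXT under ndb
        return base.with_variant(ndb_type(), "ndb")
    if ndb_size is not None:
        # e.g. String(255, ndb_size=64) -> VARCHAR(64) under ndb
        return base.with_variant(SAString(ndb_size), "ndb")
    return base
```

Compiled under any dialect other than the hypothetical "ndb" one, the variant is ignored and the base VARCHAR is emitted unchanged.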


>
>>
>> oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
>> TINYTEXT() on ndb
>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
>> most dbs, VARCHAR(64) on ndb
>> oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
>> oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
>> oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT()
>> on ndb
>> oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
>> dbs, VARCHAR(55) on ndb
>>
>> don't worry about implementation, can the above declaration ->
>> datatype mapping work ?
>>
>> Also where are we using AutoStringText(), it sounds like this is just
>> what SQLAlchemy calls the Text() datatype?   (e.g. an unlengthed
>> string type, comes out as CLOB etc).
>>
> In my patch for Neutron, you'll see a lot of the AutoStringText() calls to
> replace exceptionally long String columns (4096, 8192, and larger).

MySQL supports large VARCHARs now, OK.   Yeah, this could be
String(8192, ndb_type=TEXT) as well.


>
>
>
>
>>
>>
>>> In many cases, the use of these could be removed by simply changing the
>>> columns to more appropriate types and sizes. There is a tremendous amount
>>> of
>>> wasted space in many of the databases. I'm more than willing to help out
>>> with this if teams decide they would rather do that instead as the
>>> long-term
>>> solution. Until then, these functions enable the use of both with minimal
>>> impact.
>>>
>>> Another thing to keep in mind is that the only services that I've had to
>>> adjust column sizes for are:
>>>
>>> Cinder
>>> Neutron
>>> Nova
>>> Magnum
>>>
>>> The other services that I'm working on like Keystone, Barbican, Murano,
>>> Glance, etc. only need changes to:
>>>
>>> 1. Ensure that foreign keys are dropped and created in the correct order
>>> when changing things like indexes, constraints, etc. Many services do
>>> these
>>> proper steps already, there are just cases where this has been missed
>>> because InnoDB is very forgiving on this. But other databases are not.
>>> 2. Fixing the database migration and sync operations to use oslo.db, pass
>>> the right parameters, etc. Something that should have been done in the
>>> first
>>> place, but hasn't. So this is more of a housecleaning step to ensure that
>>> services are using oslo.db correctly.
>>>
>>> The only other oddball use case is dealing with disabling nested
>>> transactions,
>>> where Neutron is the only one that does this.
>>>
>>> On the flip side, here is a short list of services that I haven't had to
>>> make ANY changes for other than having oslo.db 4.24 or above:
>>>
>>> aodh
>>> gnocchi
>>> heat
>>> ironic
>>> manila
>>>
 3. it's not clear (I don't even know right now by looking at these
 reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
 For example in

 https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
 I see a list of String(255)'s changed to one type or the other without
 any clear notion why one would use one or the other.  Having names
 that define simply the declared nature of the type would be most
 appropriate.
>>>
>>>
>>> One has to look at what the column is being used for and decide what
>>> appropriate remediation steps are. This takes time and one must research
>>> what kind of data goes in the column, what puts it there, what consumes
>>> it,
>>> and what remediation would have the least amount of impact.
>>>
 I can add these names up to oslo.db and then we would just need to
 spread these out through all the open ndb reviews and then also patch
 up Cinder which seems to be the only ndb implementation that's been
 merged so far.

 Keep in mind this is really me trying to correct my own mistake, as I
 helped design and approved of the original approach here where
 projects would be consuming against the "ndb." namespace.
 

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Octave J. Orgeron

Hi Michael,

Comments below..

On 7/24/2017 2:49 PM, Michael Bayer wrote:

On Mon, Jul 24, 2017 at 3:37 PM, Octave J. Orgeron
 wrote:

For these, here is a brief synopsis:

AutoStringTinyText will convert a column to the TinyText type. This is used
for cases where a 255-varchar string needs to be converted to a text blob to
make the row fit within the NDB limits. If you are using ndb, it'll convert
it to TinyText; otherwise it leaves it alone. The reason the TinyText type
was chosen is that it'll hold the same 255 characters and saves on space.

AutoStringText does the same as the above, but converts the type to Text
and is meant for use cases where you need more than 255 varchar worth of
space. Good examples of these uses are where outputs of hypervisor and OVS
commands are dumped into the database.

With AutoStringSize, you pass two parameters, one being the non-NDB size and the
second being the NDB size. The point here is where you need to reduce the
size of the column to fit within the NDB limits, but you want to preserve
the String varchar type because it might be used in a key, index, etc. I
only use these in cases where the impacts are very low.. for example where a
column is used for keeping track of status (up, down, active, inactive,
etc.) that don't require 255 varchars.
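The three helpers described above might look roughly like this. The signatures are hypothetical (the real oslo.db ndb module differs in detail), and a plain use_ndb flag stands in for whatever backend detection oslo.db actually performs.

```python
from sqlalchemy import String
from sqlalchemy.dialects.mysql import TEXT, TINYTEXT


def AutoStringTinyText(length, use_ndb=False):
    """VARCHAR(length) normally; TINYTEXT under ndb to shrink the row."""
    return TINYTEXT() if use_ndb else String(length)


def AutoStringText(length, use_ndb=False):
    """For columns needing more than 255 chars: TEXT under ndb."""
    return TEXT() if use_ndb else String(length)


def AutoStringSize(non_ndb_size, ndb_size, use_ndb=False):
    """Keep the varchar type (usable in keys/indexes) but shrink it."""
    return String(ndb_size if use_ndb else non_ndb_size)
```

The key distinction is that AutoStringSize preserves the VARCHAR type (so the column can still participate in keys and indexes), while the other two trade it for a text type.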

Can the "auto" that is supplied by AutoStringTinyText and
AutoStringSize be merged?


I don't think it makes sense to make these global. We don't need to 
change all occurrences of String(255) to TinyText for example. We make 
that determination through understanding the table structure and usage. 
But I do like the idea of changing the second option to ndb_size=, I 
think that makes things very clear. If you want to collapse the use 
cases.. what about?:


oslo_db.sqlalchemy.String(255, ndb_type=TINYTEXT) -> VARCHAR(255) for 
most dbs, TINYTEXT for ndb
oslo_db.sqlalchemy.String(4096, ndb_type=TEXT) -> VARCHAR(4096) for most 
dbs, TEXT for ndb
oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on most dbs, 
VARCHAR(64) on ndb


This way, we can override the String with TINYTEXT or TEXT or change the 
size for ndb.




oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
TINYTEXT() on ndb
oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
most dbs, VARCHAR(64) on ndb
oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT() on ndb
oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
dbs, VARCHAR(55) on ndb

don't worry about implementation, can the above declaration ->
datatype mapping work ?

Also where are we using AutoStringText(), it sounds like this is just
what SQLAlchemy calls the Text() datatype?   (e.g. an unlengthed
string type, comes out as CLOB etc).

In my patch for Neutron, you'll see a lot of the AutoStringText() calls 
to replace exceptionally long String columns (4096, 8192, and larger).








In many cases, the use of these could be removed by simply changing the
columns to more appropriate types and sizes. There is a tremendous amount of
wasted space in many of the databases. I'm more than willing to help out
with this if teams decide they would rather do that instead as the long-term
solution. Until then, these functions enable the use of both with minimal
impact.

Another thing to keep in mind is that the only services that I've had to
adjust column sizes for are:

Cinder
Neutron
Nova
Magnum

The other services that I'm working on like Keystone, Barbican, Murano,
Glance, etc. only need changes to:

1. Ensure that foreign keys are dropped and created in the correct order
when changing things like indexes, constraints, etc. Many services do these
proper steps already, there are just cases where this has been missed
because InnoDB is very forgiving on this. But other databases are not.
2. Fixing the database migration and sync operations to use oslo.db, pass
the right parameters, etc. Something that should have been done in the first
place, but hasn't. So this is more of a housecleaning step to ensure that
services are using oslo.db correctly.

The only other oddball use case is dealing with disabling nested transactions,
where Neutron is the only one that does this.

On the flip side, here is a short list of services that I haven't had to
make ANY changes for other than having oslo.db 4.24 or above:

aodh
gnocchi
heat
ironic
manila


3. it's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
For example in
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
I see a list of String(255)'s changed to one type or the other without
any clear notion why one would use one or the other.  Having names
that define simply the declared nature of the type would be most
appropriate.


One has to look at what the column is being used for and decide what
appropriate remediation steps are.

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 3:37 PM, Octave J. Orgeron
 wrote:
> For these, here is a brief synopsis:
>
> AutoStringTinyText will convert a column to the TinyText type. This is used
> for cases where a 255-varchar string needs to be converted to a text blob to
> make the row fit within the NDB limits. If you are using ndb, it'll convert
> it to TinyText; otherwise it leaves it alone. The reason the TinyText type
> was chosen is that it'll hold the same 255 characters and saves on space.
>
> AutoStringText does the same as the above, but converts the type to Text
> and is meant for use cases where you need more than 255 varchar worth of
> space. Good examples of these uses are where outputs of hypervisor and OVS
> commands are dumped into the database.
>
> With AutoStringSize, you pass two parameters, one being the non-NDB size and the
> second being the NDB size. The point here is where you need to reduce the
> size of the column to fit within the NDB limits, but you want to preserve
> the String varchar type because it might be used in a key, index, etc. I
> only use these in cases where the impacts are very low.. for example where a
> column is used for keeping track of status (up, down, active, inactive,
> etc.) that don't require 255 varchars.

Can the "auto" that is supplied by AutoStringTinyText and
AutoStringSize be merged?


oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
TINYTEXT() on ndb
oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
most dbs, VARCHAR(64) on ndb
oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT() on ndb
oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
dbs, VARCHAR(55) on ndb

don't worry about implementation, can the above declaration ->
datatype mapping work ?

Also where are we using AutoStringText(), it sounds like this is just
what SQLAlchemy calls the Text() datatype?   (e.g. an unlengthed
string type, comes out as CLOB etc).




>
> In many cases, the use of these could be removed by simply changing the
> columns to more appropriate types and sizes. There is a tremendous amount of
> wasted space in many of the databases. I'm more than willing to help out
> with this if teams decide they would rather do that instead as the long-term
> solution. Until then, these functions enable the use of both with minimal
> impact.
>
> Another thing to keep in mind is that the only services that I've had to
> adjust column sizes for are:
>
> Cinder
> Neutron
> Nova
> Magnum
>
> The other services that I'm working on like Keystone, Barbican, Murano,
> Glance, etc. only need changes to:
>
> 1. Ensure that foreign keys are dropped and created in the correct order
> when changing things like indexes, constraints, etc. Many services do these
> proper steps already, there are just cases where this has been missed
> because InnoDB is very forgiving on this. But other databases are not.
> 2. Fixing the database migration and sync operations to use oslo.db, pass
> the right parameters, etc. Something that should have been done in the first
> place, but hasn't. So this is more of a housecleaning step to ensure that
> services are using oslo.db correctly.
>
> The only other oddball use case is dealing with disabling nested transactions,
> where Neutron is the only one that does this.
>
> On the flip side, here is a short list of services that I haven't had to
> make ANY changes for other than having oslo.db 4.24 or above:
>
> aodh
> gnocchi
> heat
> ironic
> manila
>
>>
>> 3. it's not clear (I don't even know right now by looking at these
>> reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
>> For example in
>> https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
>> I see a list of String(255)'s changed to one type or the other without
>> any clear notion why one would use one or the other.  Having names
>> that define simply the declared nature of the type would be most
>> appropriate.
>
>
> One has to look at what the column is being used for and decide what
> appropriate remediation steps are. This takes time and one must research
> what kind of data goes in the column, what puts it there, what consumes it,
> and what remediation would have the least amount of impact.
>
>>
>> I can add these names up to oslo.db and then we would just need to
>> spread these out through all the open ndb reviews and then also patch
>> up Cinder which seems to be the only ndb implementation that's been
>> merged so far.
>>
>> Keep in mind this is really me trying to correct my own mistake, as I
>> helped design and approved of the original approach here where
>> projects would be consuming against the "ndb." namespace.  However,
>> after seeing it in reviews how prevalent the use of this extremely
>> backend-specific name is, I think the use of the name should be much

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Octave J. Orgeron

Hi Michael,

Comments below..

On 7/24/2017 9:13 AM, Michael Bayer wrote:

On Mon, Jul 24, 2017 at 10:01 AM, Jay Pipes  wrote:


I would much prefer to *add* a brand new schema migration that handles
conversion of the entire InnoDB schema at a certain point to an
NDB-compatible one *after* that point. That way, we isolate the NDB changes
to one specific schema migration -- and can point users to that one specific
migration in case bugs arise. This is the reason that every release we add a
number of "placeholder" schema migration numbered files to handle situations
such as these.

I understand that Oracle wants to support older versions of OpenStack in
their distribution and that's totally cool with me. But, the proper way IMHO
to do this kind of thing is to take one of the placeholder migrations and
use that as the NDB-conversion migration. I would posit that since Oracle
will need to keep some not-insignificant amount of Python code in their
distribution fork of Nova in order to bring in the oslo.db and Nova NDB
support, that it will actually be *easier* for them to maintain a *separate*
placeholder schema migration for all NDB conversion work instead of changing
an existing schema migration with a new patch.

OK, if it is feasible for the MySQL engine to build out the whole
schema as InnoDB and then do a migrate that changes the storage engine
of all tables to NDB and then also changes all the datatypes, that can
work.   If you want to go that way, then fine.


Unfortunately, to do that, you'd have to drop all of the constraints, 
foreign keys, and probably indexes before doing a change to table type. 
Then go back and put them all back into place. You also have to deal with 
changing your NDB cluster configuration to force all of the traffic to a 
single node since InnoDB tables are not replicated across an NDB 
cluster. So this is a lot more overhead for operators and introduces 
greater risks.





However, I may be missing something but I'm not seeing the practical
difference.   This new "ndb" migration still goes into the source
tree, still gets invoked for all users, and if the "if ndb_enabled()"
flag is somehow broken, it breaks just as well if it's in a brand new
migration vs. if it's in an old migration.

Suppose "if ndb_enabled(engine)" is somehow broken.  Either it crashes
the migrations, or it runs inappropriately.

If the conditional is in a brand new migration file that's pushed out
in Queens, *everybody* runs it when they upgrade, as well as when they
do fresh installation, and they get the breakage.

if the conditional is in havana 216, *everybody* gets it when they do
a fresh installation, and they get the breakage.   Upgraders do not.

How is "new migration" better than "make old migration compatible" ?

Again, fine by me if the other approach works, I'm just trying to see
where I'm being dense here.
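The "if ndb_enabled(engine)" conditional being discussed could be sketched as follows. Both ndb_enabled() and its detection mechanism (a query argument on the database URL) are illustrative assumptions, not the actual oslo.db API.

```python
from sqlalchemy.engine import make_url


def ndb_enabled(engine):
    """Illustrative check: assume the NDB backend is signalled via a
    query argument on the database URL; real detection may differ."""
    return engine.url.query.get("mysql_storage_engine") == "ndb"


def upgrade(migrate_engine):
    """sqlalchemy-migrate style entry point for a schema migration."""
    if ndb_enabled(migrate_engine):
        pass  # emit NDB-safe DDL: smaller rows, TEXT variants, etc.
    else:
        pass  # emit the original InnoDB/PostgreSQL DDL unchanged
```

Whichever migration file the guard lives in, the failure modes are the same: if ndb_enabled() misfires, either branch runs on the wrong backend.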

Keep in mind that existing migrations *do* break and have to be fixed
- because while the migration files don't change, the databases they
talk to do.  The other thread I introduced about Mariadb 10.2 now
refusing to DROP columns that have a CHECK constraint is an example,
and will likely mean lots of old migration files across openstack
projects will need adjustments.



Exactly! I've seen plenty of cases where these scripts have been patched 
to fix problems that crop up in later migrations. So doing these changes 
is not that alien to OpenStack, even for Nova:


http://git.openstack.org/cgit/openstack/nova/log/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py

Also, another thing that everyone should be working on fixing is 
that in MySQL 5.7.x you'll get warnings about duplicate keys, 
indexes, constraints, etc. that WILL NOT be supported in a future 
release. So these scripts have to be patched or MySQL support for these 
databases will be broken in the not so distant future.










All the best,
-jay








Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Octave J. Orgeron

Hi Jay,

Comments below..

On 7/24/2017 8:01 AM, Jay Pipes wrote:

+Dan Smith

Good morning Mike :) Comments inline...

On 07/23/2017 08:05 PM, Michael Bayer wrote:

On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes  wrote:
Glad you brought this up, Mike. I was going to start a thread about 
this.

Comments inline.

On 07/23/2017 05:02 PM, Michael Bayer wrote:
Well, besides that point (which I agree with), that is attempting to 
change

an existing database schema migration, which is a no-no in my book ;)


OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).


I'm not being a purist. I'm being a realist :) See below...


I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*. That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target databse, that should never change. No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend which are necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.


No disagreement here.


So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accomodate for when the
"ndb" flag is present - this because these migrations have to run on
brand new ndb installations in order to create the database. If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and also it is fine that
the migrations are used to build up the DB).


So, I see your point here, but my concern here is that if we *modify* 
an existing schema migration that has already been tested to properly 
apply a schema change for MySQL/InnoDB and PostgreSQL with code that 
is specific to NDB, we introduce the potential for bugs where users 
report that the same migration works sometimes but fails other times.


I don't think that the testing issues should be a concern here because 
I've been working to make sure that the tests work with both InnoDB and 
NDB. It's a pain, but again, we are only talking about a handful of the 
services. Bottom line here is that if you are not using NDB, the changes 
have zero effect on your setup.




I would much prefer to *add* a brand new schema migration that handles 
conversion of the entire InnoDB schema at a certain point to an 
NDB-compatible one *after* that point. That way, we isolate the NDB 
changes to one specific schema migration -- and can point users to 
that one specific migration in case bugs arise. This is the reason 
that every release we add a number of "placeholder" schema migration 
numbered files to handle situations such as these.


The only problem with this approach is that it assumes you are on InnoDB 
to start out with, which is not the use case here. This is for new 
installations or ones that started out with NDB, so the base schema in 
the migration scripts has to work from the start.


I understand that Oracle wants to support older versions of OpenStack 
in their distribution and that's totally cool with me. But, the proper 
way IMHO to do this kind of thing is to take one of the placeholder 
migrations and use that as the NDB-conversion migration. I would posit 
that since Oracle will need to keep some not-insignificant amount of 
Python code in their distribution fork of Nova in order to bring in 
the oslo.db and Nova NDB support, that it will actually be *easier* 
for them to maintain a *separate* placeholder schema migration for all 
NDB conversion work instead of changing an existing schema migration 
with a new patch.


And this is the whole point of the work that I'm doing. Getting upstream 
so that others can benefit and so that we don't have to waste cycles 
maintaining custom code. Instead, we do all of the work upstream and 
that will enable our customers to more easily upgrade from one release 
to another. FYI, we have been using NDB since version 2 of our product. 
We are working on version 4 right now.




All the best,
-jay


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Octave J. Orgeron

Hi Jay,

Comments below..

Thanks,
Octave

On 7/23/2017 4:10 PM, Jay Pipes wrote:
Glad you brought this up, Mike. I was going to start a thread about 
this. Comments inline.


On 07/23/2017 05:02 PM, Michael Bayer wrote:

I've been working with Octave Orgeron in assisting with new rules and
datatypes that would allow projects to support the NDB storage engine
with MySQL.

To that end, we've made changes to oslo.db in [1] to support this, and
there are now a bunch of proposals such as [2] [3] to implement new
ndb-specific structures in projects.

The reviews for all downstream projects except Cinder are still under
review. While we have a chance to avoid a future naming problem, I am
making the following proposal:

Rather than having all the projects make use of
oslo_db.sqlalchemy.ndb.AutoStringTinyText / AutoStringSize, we add new
generic types to oslo.db :

oslo_db.sqlalchemy.types.SmallString
oslo_db.sqlalchemy.types.String


This is precisely what I was going to suggest because I was not going 
to go along with the whole injection of NDB-name-specific column types 
in Nova. :)



(or similar)

Internally, the ndb module would be mapping its implementation for
AutoStringTinyText and AutoStringSize to these types. Functionality
would be identical, just the naming convention exported to downstream
consuming projects would no longer refer to "ndb." for
datatypes.
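The internal mapping described here can be sketched in miniature. This is an illustrative stand-in, not the actual oslo.db API; the real implementation would use SQLAlchemy TypeDecorators rather than a dict lookup:

```python
# Generic names exposed to projects; backend-specific DDL chosen inside
# oslo.db. All names and DDL strings here are illustrative assumptions.
DDL_BY_BACKEND = {
    "ndb":    {"SmallString": "TINYTEXT",     "String": "TEXT"},
    "innodb": {"SmallString": "VARCHAR(255)", "String": "VARCHAR(255)"},
}

def load_dialect_impl(generic_name, backend):
    # Projects reference only the generic name; oslo.db picks the
    # backend-specific column type when the table is created.
    return DDL_BY_BACKEND[backend][generic_name]

print(load_dialect_impl("SmallString", "ndb"))     # TINYTEXT
print(load_dialect_impl("SmallString", "innodb"))  # VARCHAR(255)
```

The point of the indirection is that a future db2 (or any other) backend only adds an entry inside oslo.db; no consuming project changes.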

Reasons for doing so include:

1. openstack projects should be relying upon oslo.db to make the best
decisions for any given database backend, hardcoding as few
database-specific details as possible.   While it's unavoidable that
migration files will have some "if ndb:" kinds of blocks, for the
datatypes themselves, the "ndb." namespace defeats extensibility.


Right, my thoughts exactly.

if IBM wanted OpenStack to run on DB2 (again?) and wanted to add a 
"db2.String" implementation to oslo.db for example, the naming and 
datatypes would need to be opened up as above in any case;  might as 
well make the change now before the patch sets are merged.


Yep.


2. The names "AutoStringTinyText" and "AutoStringSize" themselves are
confusing and inconsistent with each other (e.g. what is "auto"? one is
"auto" if it's String or TinyText and the other is "auto" if it's
String, and..."size"?)


Yes. Oh God yes. The MySQL TINY/MEDIUM/BIG [INT|TEXT] data types were 
always entirely irrational and confusing. No need to perpetuate that 
terminology.


FYI, TINYTEXT is part of MySQL's syntax and dialect, so it's not alien 
to MySQL folks.



3. it's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
For example in 
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py

I see a list of String(255)'s changed to one type or the other without
any clear notion why one would use one or the other.  Having names
that define simply the declared nature of the type would be most
appropriate.


Well, besides that point (which I agree with), that is attempting to 
change an existing database schema migration, which is a no-no in my 
book ;)


Unfortunately, if we don't modify the scripts, we can't create the 
schemas on the NDB database. Tables have to fit in the row limits. So 
unless we have a way to override the scripts, we have to modify them.





I can add these names up to oslo.db and then we would just need to
spread these out through all the open ndb reviews and then also patch
up Cinder which seems to be the only ndb implementation that's been
merged so far.


+1


Keep in mind this is really me trying to correct my own mistake, as I
helped design and approved of the original approach here where
projects would be consuming against the "ndb." namespace. However,
after seeing it in reviews how prevalent the use of this extremely
backend-specific name is, I think the use of the name should be much
less frequent throughout projects and only surrounding logic that is
purely to do with the ndb backend and no others.   At the datatype
level, the chance of future naming conflicts is very high and we
should fix this mistake (my mistake) before it gets committed
throughout many downstream projects.


I had a private conversation with Octave on Friday. I had mentioned 
that I was upset I didn't know about the series of patches to oslo.db 
that added that module. I would certainly have argued against that 
approach. Please consider hitting me with a cluestick next time 
something of this nature pops up. :)


Also, as I told Octave, I have no problem whatsoever with NDB Cluster. 
I actually think it's a pretty brilliant piece of engineering -- and 
have for over a decade since I worked at MySQL.


My complaint regarding the code patch proposed to Nova was around the 
hard-coding of the ndb namespace into the model definitions.


Best,
-jay



[1] https://review.openstack.org/#/c/427970/

[2] https://review.openstack.org/#/c/446643/

[3] https://review.o

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Octave J. Orgeron

Hi Mike,

Thanks for putting this together. Comments below..

Thanks,
Octave

On 7/23/2017 3:02 PM, Michael Bayer wrote:

I've been working with Octave Orgeron in assisting with new rules and
datatypes that would allow projects to support the NDB storage engine
with MySQL.

To that end, we've made changes to oslo.db in [1] to support this, and
there are now a bunch of proposals such as [2] [3] to implement new
ndb-specific structures in projects.

The reviews for all downstream projects except Cinder are still under
review. While we have a chance to avoid a future naming problem, I am
making the following proposal:

Rather than having all the projects make use of
oslo_db.sqlalchemy.ndb.AutoStringTinyText / AutoStringSize, we add new
generic types to oslo.db :

oslo_db.sqlalchemy.types.SmallString
oslo_db.sqlalchemy.types.String

(or similar)

Internally, the ndb module would be mapping its implementation for
AutoStringTinyText and AutoStringSize to these types.   Functionality
would be identical, just the naming convention exported to downstream
consuming projects would no longer refer to "ndb." for
datatypes.


I think this would make sense.



Reasons for doing so include:

1. openstack projects should be relying upon oslo.db to make the best
decisions for any given database backend, hardcoding as few
database-specific details as possible.   While it's unavoidable that
migration files will have some "if ndb:" kinds of blocks, for the
datatypes themselves, the "ndb." namespace defeats extensibility.  if
IBM wanted OpenStack to run on DB2 (again?) and wanted to add a
"db2.String" implementation to oslo.db for example, the naming and
datatypes would need to be opened up as above in any case;  might as
well make the change now before the patch sets are merged.


Agreed that this extra layer of abstraction could be used by DB2, 
MongoDB, etc.


2. The names "AutoStringTinyText" and "AutoStringSize" themselves are
confusing and inconsistent with each other (e.g. what is "auto"? one is
"auto" if it's String or TinyText and the other is "auto" if it's
String, and..."size"?)


For these, here is a brief synopsis:

AutoStringTinyText will convert a column to the TinyText type. This is 
used for cases where a 255-character varchar needs to be converted to a 
text blob to make the row fit within the NDB limits. If you are using 
NDB, it'll convert the column to TinyText; otherwise it leaves it alone. 
TinyText was chosen because it holds the same 255 characters while 
saving space.


AutoStringText does the same as the above, but converts the type to 
Text; it is meant for use cases where you need more than 255 characters 
of space. Good examples are columns where the output of hypervisor and 
OVS commands is dumped into the database.


With AutoStringSize, you pass two parameters: the non-NDB size and the 
NDB size. The point here is where you need to reduce the size of the 
column to fit within the NDB limits, but you want to preserve the String 
varchar type because it might be used in a key, index, etc. I only use 
this in cases where the impact is very low, for example where a column 
keeps track of status (up, down, active, inactive, etc.) and doesn't 
require 255 characters.


In many cases, the use of these could be removed by simply changing the 
columns to more appropriate types and sizes; there is a tremendous 
amount of wasted space in many of the databases. I'm more than willing 
to help out with this if teams decide they would rather do that as the 
long-term solution. Until then, these helpers enable the use of both 
backends with minimal impact.
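Given the semantics just described, the three helpers behave roughly like this. These are pure-Python stand-ins for illustration; the real helpers live in oslo.db, return SQLAlchemy column types, and the `use_ndb` flag is a hypothetical name:

```python
# Stand-ins mirroring the described behavior; names/flags are assumptions.
def auto_string_tiny_text(use_ndb):
    # VARCHAR(255) becomes TINYTEXT on NDB: same 255-char capacity, but
    # stored off-row so the table fits NDB's row-size limit.
    return "TINYTEXT" if use_ndb else "VARCHAR(255)"

def auto_string_text(use_ndb):
    # For columns needing more than 255 chars, e.g. dumped command output.
    return "TEXT" if use_ndb else "VARCHAR(255)"

def auto_string_size(non_ndb_size, ndb_size, use_ndb):
    # Keep the VARCHAR type (safe for keys/indexes) but shrink it on NDB.
    return "VARCHAR(%d)" % (ndb_size if use_ndb else non_ndb_size)

print(auto_string_tiny_text(True))      # TINYTEXT
print(auto_string_size(255, 36, True))  # VARCHAR(36)
```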


Another thing to keep in mind is that the only services that I've had to 
adjust column sizes for are:


Cinder
Neutron
Nova
Magnum

The other services that I'm working on like Keystone, Barbican, Murano, 
Glance, etc. only need changes to:


1. Ensure that foreign keys are dropped and created in the correct order 
when changing things like indexes, constraints, etc. Many services do 
these proper steps already; there are just cases where this has been 
missed because InnoDB is very forgiving about ordering, but other 
databases are not.
2. Fix the database migration and sync operations to use oslo.db and 
pass the right parameters. Something that should have been done in the 
first place, but hasn't, so this is more of a housecleaning step to 
ensure that services are using oslo.db correctly.
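The ordering rule in point 1 can be shown as data. All table, index, and constraint names below are made up for illustration:

```python
# Hypothetical DDL sequence illustrating the rule: foreign keys that
# depend on an index must be dropped first and recreated last. InnoDB
# tolerates sloppier orderings; NDB and other backends do not.
def rebuild_index_ops(table, index, fks):
    ops = ["ALTER TABLE %s DROP FOREIGN KEY %s" % (table, fk)
           for fk in fks]
    ops.append("DROP INDEX %s ON %s" % (index, table))
    ops.append("CREATE INDEX %s ON %s (uuid)" % (index, table))
    ops += ["ALTER TABLE %s ADD CONSTRAINT %s FOREIGN KEY (uuid) "
            "REFERENCES instances (uuid)" % (table, fk) for fk in fks]
    return ops

for op in rebuild_index_ops("instance_metadata", "uuid_idx",
                            ["fk_instance_uuid"]):
    print(op)
```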


The only other oddball use case is dealing with disabling nested 
transactions, which Neutron is the only one to do.


On the flip side, here is a short list of services that I haven't had to 
make ANY changes for, other than requiring oslo.db 4.24 or above:


aodh
gnocchi
heat
ironic
manila



3. it's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
For example in 
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migra

Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-24 Thread Akihiro Motoki
2017-07-24 23:44 GMT+09:00 Doug Hellmann :
> Excerpts from Kevin Benton's message of 2017-07-23 14:19:51 -0700:
>> Yeah, the networking guide does include configuration for some of the
>> sub-projects (e.g. BGP is at [1]). For the remaining ones there is work
>> that needs to be done because their docs live in wiki pages.
>>
>> 1.
>> https://docs.openstack.org/ocata/networking-guide/config-bgp-dynamic-routing.html
>
> OK, that's good to know. It would be good to be consistent with the
> approach to the stadium projects, so we can either eliminate the list of
> projects from landing pages that show things like "all of the admin
> guides" or we can add the projects so users can find the docs. If
> they're all covered in the networking guide, we could include that
> information on the admin landing page, for example.

Thanks for the advice.

Yeah, we need some consistency across the stadium projects.
The installation and/or config guides can be covered by the networking
guide (admin guide). Regarding the configuration references, I think it
is better to keep them in the individual projects, as they are generated
automatically from the code.

The stadium projects roughly fall into two categories: service projects
(like FWaaS, BGP, SFC...) and back-end projects
(networking-ovn/odl/midonet...). For the service projects, the
networking guide looks like the best place to cover various things; a
single place will help readers. The back-end projects keep their own
content in their repositories, and I am not sure which is better.
Anyway, we can discuss and reach consensus in the neutron team.

> In the mean time, if someone from the neutron project will review
> the list of "Missing URLs" on https://doughellmann.com/doc-migration/
> and let me know which ones represent content included in other
> documents, I can update the burndown chart generator to reflect
> that.

I regularly check the doc-migration burndown chart, including the
missing URLs, for the neutron stadium projects.
Progress is generally good and the remaining items are as follows:
- Configuration references for all listed projects are under review
  (all have been proposed and I am taking care of them).
- The installation guide for networking-odl is also under review, and I
  believe it will land soon.
All progress is tracked on the doc-migration etherpad (neutron section).

Akihiro

>
> Doug
>
>>
>>
>> On Sun, Jul 23, 2017 at 1:32 PM, Doug Hellmann 
>> wrote:
>>
>> > Excerpts from Kevin Benton's message of 2017-07-23 01:31:25 -0700:
>> > > Yeah, I was just thinking it makes it more explicit that we haven't just
>> > > skipped doing an admin guide for a particular project.
>> >
>> > Sure, you can do that. I don't think we want to link to all of those
>> > pages from the list of admin guides, though.
>> >
>> > I've updated the burndown chart generator to ignore the missing
>> > admin guide URLs for networking subprojects.
>> >
>> > I don't see configuration or installation guides for quite a few
>> > of those, either. Are those also handled within the neutron main
>> > tree docs?
>> >
>> > Doug
>> >
>> >
>



Re: [openstack-dev] help required regarding devstack

2017-07-24 Thread Sean Dague
Also see - https://docs.openstack.org/devstack/latest/systemd.html

On 07/24/2017 02:09 PM, Kristi Nikolla wrote:
> Hi,
> 
> I do not know about VM migration, however for restarting nova the right 
> service names are devstack@n-api, devstack@n-cpu, etc. 
> 
> ubuntu@k2k1:~$ systemctl list-units | grep devstack
>   devstack@c-api.service          loaded active running   Devstack devstack@c-api.service
>   devstack@c-sch.service          loaded active running   Devstack devstack@c-sch.service
>   devstack@c-vol.service          loaded active running   Devstack devstack@c-vol.service
>   devstack@dstat.service          loaded active running   Devstack devstack@dstat.service
>   devstack@etcd.service           loaded active running   Devstack devstack@etcd.service
>   devstack@g-api.service          loaded active running   Devstack devstack@g-api.service
>   devstack@g-reg.service          loaded active running   Devstack devstack@g-reg.service
>   devstack@keystone.service       loaded active running   Devstack devstack@keystone.service
>   devstack@n-api-meta.service     loaded active running   Devstack devstack@n-api-meta.service
>   devstack@n-api.service          loaded active running   Devstack devstack@n-api.service
>   devstack@n-cauth.service        loaded active running   Devstack devstack@n-cauth.service
>   devstack@n-cond.service         loaded active running   Devstack devstack@n-cond.service
>   devstack@n-cpu.service          loaded active running   Devstack devstack@n-cpu.service
>   devstack@n-novnc.service        loaded active running   Devstack devstack@n-novnc.service
>   devstack@n-sch.service          loaded active running   Devstack devstack@n-sch.service
>   devstack@placement-api.service  loaded active running   Devstack devstack@placement-api.service
>   devstack@q-agt.service          loaded active running   Devstack devstack@q-agt.service
>   devstack@q-dhcp.service         loaded active running   Devstack devstack@q-dhcp.service
>   devstack@q-l3.service           loaded active running   Devstack devstack@q-l3.service
>   devstack@q-meta.service         loaded active running   Devstack devstack@q-meta.service
>   devstack@q-svc.service          loaded active running   Devstack devstack@q-svc.service
>   system-devstack.slice           loaded active active    system-devstack.slice
> 
>> On Jul 24, 2017, at 9:43 AM, Ziad Nayyer  wrote:
>>
>> Dear,
>>
>> I am a PhD candidate at COMSATS, lahore, Pakistan. I am working on devstack. 
>> Just wanted to know whether it supports VM Migration between two devstacks 
>> installed on two different physical machines as currently I am unable to 
>> find any lead. Also please let me know how to restart a particular service 
>> on devstack version pike on centos7.
>>
>> The screen file is not being generated in devstack folder ->stack-screenrc 
>> and systemctl only restarts keystone not any other like nova-compute
>>
>> sudo systemctl restart devstack@keystone (works)
>> sudo systemctl restart devstack@nova
>> sudo systemctl restart devstack@nova-compute
>>
>> and any other does not work.
>>
>> I'll be very thankful.
>>
>>
>> -- 
>> Regards,
>>  
>> Muhammad Ziad Nayyer Dar
> 
> 
> 


-- 
Sean Dague
http://dague.net


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-07-24 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. Docs due to the docs re-org - See 
http://lists.openstack.org/pipermail/openstack-dev/2017-July/119221.html
1.1. Ironic - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:doc-migration
1.2. ironic-inspector - 
https://review.openstack.org/#/q/status:open+project:openstack/ironic-inspector+branch:master+topic:doc-migration
1.3. molteniron - 
https://review.openstack.org/#/q/status:open+project:openstack/molteniron+branch:master+topic:doc-migration
1.4. seems done: ironic-lib, ironic-ui, ironic-python-agent, sushy, 
sushy-tools, python-ironic-inspector-client; bifrost delayed
1.5. TODO configuration guide for both ironic and ironic-inspector
2. Booting from volume:
2.1.  https://review.openstack.org/#/c/484032/ -- Create boot.ipxe upon 
start-up - In Review/Requires revision
2.2. https://review.openstack.org/#/c/215385 - Nova patch
2.3. https://review.openstack.org/#/c/472740 - Tempest Scenario
3. Rolling upgrades:
3.1.  'Add new dbsync command with first online data migration': 
https://review.openstack.org/#/c/408556/
4. Physnet awareness:
4.1. Tempest API test: https://review.openstack.org/#/c/470915/ 1x +2
4.2. Rolling upgrades:
5. ironicclient & ironic-inspector-client patches for release this week:
5.1. client: https://review.openstack.org/486677 1x +2


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 17 Jul 2017 and 24 Jul 2017)
- Ironic: 259 bugs (+2) + 260 wishlist items (+2). 31 new (+3), 211 in progress 
(+2), 0 critical (-1), 32 high and 31 incomplete
- Inspector: 14 bugs + 28 wishlist items. 1 new (-1), 12 in progress, 0 
critical (-1), 4 high and 3 incomplete
- Nova bugs with Ironic tag: 17. 4 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO

Generic boot-from-volume (TheJulia, dtantsur)
-
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- Last week we identified an issue where a follow-up patch broke a BFV 
machine when progressing through to cleaning.
- The issue was identified, and some discussion has occurred regarding 
how to properly fix it. We are going to remove the follow-up code in 
favor of a better solution to the same problem.
-  https://review.openstack.org/#/c/484032/
- Last week we also achieved the first boot of a machine via a cinder 
volume in OpenStack CI
- 
http://logs.openstack.org/12/485812/6/check/gate-tempest-dsvm-ironic-ipa-wholedisk-bios-agent_ipmitool-tinyipa-ubuntu-xenial/375227c/
- Achieved via DNM patches - Project-config update has been proposed 
and presently has one +2.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/466333/ - Devstack changes or Boot 
from Volume - MERGED
https://review.openstack.org/#/c/484032/ -- Create boot.ipxe upon 
start-up - In Review/Requires revision
https://review.openstack.org/#/c/472740/ - Tempest test scenario 
for BFV
https://review.openstack.org/#/c/479326/ - BFV deploy follow-up - 
Requires revision - should be landed after the tempest test can be executed.
Nova:
https://review.openstack.org/#/c/215385 - Ironic: Support boot from 
cinder volume - In Review
https://review.openstack.org/#/c/468353 - Ironic: Get IP address 
for volume connector - --NOT REQUIRED FOR PIKE-- - Is a follow-up to the first 
nova patch to allow
Project-C

Re: [openstack-dev] help required regarding devstack

2017-07-24 Thread Kristi Nikolla
Hi,

I do not know about VM migration, however for restarting nova the right service 
names are devstack@n-api, devstack@n-cpu, etc. 

ubuntu@k2k1:~$ systemctl list-units | grep devstack
  devstack@c-api.service          loaded active running   Devstack devstack@c-api.service
  devstack@c-sch.service          loaded active running   Devstack devstack@c-sch.service
  devstack@c-vol.service          loaded active running   Devstack devstack@c-vol.service
  devstack@dstat.service          loaded active running   Devstack devstack@dstat.service
  devstack@etcd.service           loaded active running   Devstack devstack@etcd.service
  devstack@g-api.service          loaded active running   Devstack devstack@g-api.service
  devstack@g-reg.service          loaded active running   Devstack devstack@g-reg.service
  devstack@keystone.service       loaded active running   Devstack devstack@keystone.service
  devstack@n-api-meta.service     loaded active running   Devstack devstack@n-api-meta.service
  devstack@n-api.service          loaded active running   Devstack devstack@n-api.service
  devstack@n-cauth.service        loaded active running   Devstack devstack@n-cauth.service
  devstack@n-cond.service         loaded active running   Devstack devstack@n-cond.service
  devstack@n-cpu.service          loaded active running   Devstack devstack@n-cpu.service
  devstack@n-novnc.service        loaded active running   Devstack devstack@n-novnc.service
  devstack@n-sch.service          loaded active running   Devstack devstack@n-sch.service
  devstack@placement-api.service  loaded active running   Devstack devstack@placement-api.service
  devstack@q-agt.service          loaded active running   Devstack devstack@q-agt.service
  devstack@q-dhcp.service         loaded active running   Devstack devstack@q-dhcp.service
  devstack@q-l3.service           loaded active running   Devstack devstack@q-l3.service
  devstack@q-meta.service         loaded active running   Devstack devstack@q-meta.service
  devstack@q-svc.service          loaded active running   Devstack devstack@q-svc.service
  system-devstack.slice           loaded active active    system-devstack.slice
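In other words, the unit names follow devstack's short service codes (n-api, n-cpu, ...), not the project names. A sketch of the restart commands, printed rather than executed since systemctl needs a real devstack host:

```shell
# DevStack (with USE_SYSTEMD) runs services as templated systemd units,
# devstack@<shortcode>; "nova" is split across n-api, n-cpu, n-sch, n-cond.
for unit in n-api n-cpu n-sch n-cond; do
  echo "sudo systemctl restart devstack@$unit"
done
```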

> On Jul 24, 2017, at 9:43 AM, Ziad Nayyer  wrote:
> 
> Dear,
> 
> I am a PhD candidate at COMSATS, lahore, Pakistan. I am working on devstack. 
> Just wanted to know whether it supports VM Migration between two devstacks 
> installed on two different physical machines as currently I am unable to find 
> any lead. Also please let me know how to restart a particular service on 
> devstack version pike on centos7.
> 
> The screen file is not being generated in devstack folder ->stack-screenrc 
> and systemctl only restarts keystone not any other like nova-compute
> 
> sudo systemctl restart devstack@keystone (works)
> sudo systemctl restart devstack@nova
> sudo systemctl restart devstack@nova-compute
> 
> and any other does not work.
> 
> I'll be very thankful.
> 
> 
> -- 
> Regards,
>  
> Muhammad Ziad Nayyer Dar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]

2017-07-24 Thread Morales, Victor
Hi Meher,

Can you include a paste for this? Can you also verify that your 
global-requirements file contains the tricircleclient module [1]? It 
seems like the pip_install_gr function [2][3] is not resolving [4] the 
tricircleclient module [5].

Regards,
Victor Morales

[1] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L233
[2] https://github.com/openstack/tricircle/blob/master/devstack/plugin.sh#L323
[3] https://github.com/openstack-dev/devstack/blob/master/inc/python#L59-L70
[4] https://github.com/openstack-dev/devstack/blob/master/inc/python#L362-L372
[5] https://pypi.python.org/pypi/tricircleclient
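A quick way to check point [1] locally is to grep the requirements file for the client name. The file below is a stand-in created for illustration; on a real node, check the copy of global-requirements.txt that pip_install_gr actually reads:

```shell
# Stand-in global-requirements.txt; version pins here are invented.
reqs=$(mktemp)
printf 'oslo.db>=4.24.0\ntricircleclient>=0.1.0\n' > "$reqs"
grep -n 'tricircleclient' "$reqs" || echo 'MISSING: add it before stacking'
```

If the grep prints nothing on the real file, devstack's requirements check will fail with exactly the error Meher reported.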


From: "meher.h...@orange.com" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, July 24, 2017 at 8:57 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [tricircle]

Hello to the community of Openstack-Tricircle,

I hope you are all well! I posted earlier about the problem I 
encountered while trying the "Single pod installation with DevStack"; I 
managed to deploy the solution by installing on Ubuntu rather than RHEL.

Now I am trying the "Multi-pod Installation with DevStack". On the first 
node, the script stops with an error of this form: "[ERROR] 
/opt/stack/devstack/inc/python:256 Can not find package tricircleclient 
in requirements". I wanted to know if you have any idea how to fix this 
error.

I thank you in advance!

Meher

Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com



_






This message and its attachments may contain confidential or privileged 
information that may be protected by law; they should not be 
distributed, used or copied without authorisation. If you have received 
this email in error, please notify the sender and delete this message 
and its attachments. As emails may be altered, Orange is not liable for 
messages that have been modified, changed or falsified. Thank you.


Re: [openstack-dev] [oslo] trying to understand CORS deprecation

2017-07-24 Thread Jay Pipes

On 07/24/2017 12:43 PM, Sean Dague wrote:

I'm trying to knock out common deprecation messages which are generating
noise in the test runs. There is a deprecation message emitted all the
time during test runs in Nova which is:

DeprecationWarning: Method 'CORS.set_latent()' has moved to
'method.set_defaults()': CORS.set_latent has been deprecated in favor of
oslo_middleware.cors.set_defaults"

But from what I can see its primary caller is the cors middleware
itself -
https://github.com/openstack/oslo.middleware/blob/1cf39ee5c3739c18fed78946532438550f56356f/oslo_middleware/cors.py#L133-L137


At least I'm having a hard time finding anyone else in this stack
calling set_latent. Is this just a circular bug on the cors.py module?



FYI: https://bugs.launchpad.net/oslo.middleware/+bug/1642008

It's bothered me for a while, but I've never been able to get to the bottom of it.
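The pattern behind the warning (and the suspected circular bug) can be reproduced with a self-contained sketch. The names mirror the oslo.middleware ones, but this is not its actual code:

```python
import warnings

def set_defaults(**overrides):
    # The new module-level home of the old method's logic.
    return overrides

class CORS(object):
    def set_latent(self, **overrides):
        # Compatibility shim left behind after the move.
        warnings.warn("CORS.set_latent has been deprecated in favor of "
                      "oslo_middleware.cors.set_defaults",
                      DeprecationWarning)
        return set_defaults(**overrides)

# If the middleware itself still calls the shim internally, merely using
# the class warns on every run: the "circular" situation Sean describes.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    CORS().set_latent(allow_headers=["X-Auth-Token"])

print(len(caught))  # 1
```

The fix in such cases is simply to make the library call the new function directly, keeping the shim only for external callers.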

Best,
-jay



[openstack-dev] [oslo] trying to understand CORS deprecation

2017-07-24 Thread Sean Dague
I'm trying to knock out common deprecation messages which are generating
noise in the test runs. There is a deprecation message emitted all the
time during test runs in Nova which is:

DeprecationWarning: Method 'CORS.set_latent()' has moved to
'method.set_defaults()': CORS.set_latent has been deprecated in favor of
oslo_middleware.cors.set_defaults"

But from what I can see its primary caller is the cors middleware
itself -
https://github.com/openstack/oslo.middleware/blob/1cf39ee5c3739c18fed78946532438550f56356f/oslo_middleware/cors.py#L133-L137


At least I'm having a hard time finding anyone else in this stack
calling set_latent. Is this just a circular bug on the cors.py module?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of ironic integrated within openstack

2017-07-24 Thread Sam Betts (sambetts)
Hey Greg,

The Ironic deploy agent images are ramdisk images that include the 
ironic-python-agent (https://docs.openstack.org/ironic-python-agent/latest/), 
which is a tool built by the ironic team and used by ironic to deploy 
and clean up the baremetal nodes.

The cirros images are just the default images loaded by devstack; I 
believe by default it downloads them from the cirros site, 
http://download.cirros-cloud.net

Sam

On 24/07/2017, 17:07, "Waines, Greg"  wrote:

Hey Lucas,

Thanks for the pointer to this ironic devstack setup using VMs as baremetal 
servers.
I was able to follow the recipe and get this working and play with ironic.

Of course I’ve got some follow up questions.


Questions on the images:
stack@devstack-ironic:~/devstack$ glance image-list
+--------------------------------------+------------------------------------+
| ID                                   | Name                               |
+--------------------------------------+------------------------------------+
| 8091821f-a731-409c-a2fe-8986be444937 | cirros-0.3.5-x86_64-disk           |
| 087602b0-32b8-4b0d-823d-e3880614368f | cirros-0.3.5-x86_64-uec            |
| 44bda48e-e1a2-4680-9067-ceb5a3b0d150 | cirros-0.3.5-x86_64-uec-kernel     |
| 2027800b-4310-4bdc-a003-d4925e116f47 | cirros-0.3.5-x86_64-uec-ramdisk    |
| a47a03ca-e504-4f3a-a464-3a4815b89709 | ir-deploy-agent_ipmitool.initramfs |
| 8ae53801-de05-44e6-88d4-0e04738da9b7 | ir-deploy-agent_ipmitool.kernel    |
+--------------------------------------+------------------------------------+
stack@devstack-ironic:~/devstack$


- so 'cirros-0.3.5-x86_64-disk' is the normal typical default cirros
  image for VMs in devstack

- the ironic devstack config/setup must have set up these other images.
  QUESTIONS:

  - how were the 'cirros-0.3.5-x86_64-uec...' images created?
    - were they generated from the cirros-0.3.5-x86_64-disk image using
      glance or an external tool
      (e.g. https://docs.openstack.org/diskimage-builder/latest/)?
    - or were they downloaded from some cirros distribution site?

  - the 'ir-deploy-agent_ipmitool.initramfs/kernel' images:
    - what is the role of these images? (feel free to point me to a
      description of this in the ironic documentation)
    - e.g.:
      - are they specific to the "test" environment of using VMs as fake
        bare metal servers?
      - is this image generic regardless of the specific end-user image
        (cirros, ubuntu, centos, ...) being put on the bare metal server?
      - is this image used for preparing/cleaning the bare metal server
        (e.g. wiping non-root disks, etc.) prior to putting on the
        end-user image?


Greg.



From: Lucas Alvares Gomes 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, July 20, 2017 at 10:52 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of 
ironic integrated within openstack

Hi Greg,

> I’m an ironic newbie ...
>

First off, welcome to the community (-:

> where can I find a good / relatively-current (e.g. PIKE) demo of Ironic
> integrated within OpenStack ?
>

I would recommend deploying it with DevStack on a VM and playing with it, you 
can follow this document in order to do it: 
https://docs.openstack.org/ironic/latest/contributor/dev-quickstart.html#deploying-ironic-with-devstack

Hope that helps,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of ironic integrated within openstack

2017-07-24 Thread Waines, Greg
Hey Lucas,

Thanks for the pointer to this ironic devstack setup using VMs as baremetal 
servers.
I was able to follow the recipe and get this working and play with ironic.

Of course I’ve got some follow up questions.


Questions on the images:
stack@devstack-ironic:~/devstack$ glance image-list
+--------------------------------------+------------------------------------+
| ID                                   | Name                               |
+--------------------------------------+------------------------------------+
| 8091821f-a731-409c-a2fe-8986be444937 | cirros-0.3.5-x86_64-disk           |
| 087602b0-32b8-4b0d-823d-e3880614368f | cirros-0.3.5-x86_64-uec            |
| 44bda48e-e1a2-4680-9067-ceb5a3b0d150 | cirros-0.3.5-x86_64-uec-kernel     |
| 2027800b-4310-4bdc-a003-d4925e116f47 | cirros-0.3.5-x86_64-uec-ramdisk    |
| a47a03ca-e504-4f3a-a464-3a4815b89709 | ir-deploy-agent_ipmitool.initramfs |
| 8ae53801-de05-44e6-88d4-0e04738da9b7 | ir-deploy-agent_ipmitool.kernel    |
+--------------------------------------+------------------------------------+
stack@devstack-ironic:~/devstack$


- so ‘cirros-0.3.5-x86_64-disk’ is the normal default cirros image for VMs in
  devstack

- the ironic devstack config/setup must have set up these other images.
  QUESTIONS:

  - how were the ‘cirros-0.3.5-x86_64-uec...’ images created ?

    - were they generated from the cirros-0.3.5-x86_64-disk image using glance
      or an external tool, e.g. https://docs.openstack.org/diskimage-builder/latest/ ?
    - or were they downloaded from some cirros distribution site ?

  - the ‘ir-deploy-agent_ipmitool.initramfs/kernel’ images:

    - what is the role of these images ?
      (feel free to point me to a description of this in the ironic documentation)
    - e.g.

      - is it specific to the “test” environment of using VMs as fake bare
        metal servers ?
      - is this image generic regardless of the specific end-user image
        (cirros, ubuntu, centos, ...) being put on the bare metal server ?
      - is this image being used for preparing / cleaning the bare metal
        server (e.g. wiping non-root disks, etc) prior to putting on the
        end-user image ?


Greg.



From: Lucas Alvares Gomes 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, July 20, 2017 at 10:52 AM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [ironic] Looking for a good end-to-end demo of 
ironic integrated within openstack

Hi Greg,

> I’m an ironic newbie ...
>

First off, welcome to the community (-:

> where can I find a good / relatively-current (e.g. PIKE) demo of Ironic
> integrated within OpenStack ?
>

I would recommend deploying it with DevStack on a VM and playing with it, you 
can follow this document in order to do it: 
https://docs.openstack.org/ironic/latest/contributor/dev-quickstart.html#deploying-ironic-with-devstack

Hope that helps,
Lucas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-24 Thread Michael Bayer
Hey, good news: the owner of the issue upstream found that the SQL
standard agrees with my proposed behavior. So while this is current
MariaDB 10.2 / 10.3 behavior, hopefully it will be resolved in an
upcoming release within those series. Not sure of the timing, though,
so we may not be able to duck it.
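
In the meantime, for anyone who needs workaround #3 below (reflect the CHECK
constraints, drop the ones naming the column, then drop the column), here is a
rough, untested sketch. Note the assumptions: `get_check_constraints()` needs
SQLAlchemy >= 1.1, CHECK reflection support varies by dialect and version, and
the substring match plus the table/column names are purely illustrative:

```python
# Rough sketch of workaround #3: reflect CHECK constraints on a table,
# drop by name any that mention the target column, then drop the column.
# Illustrative only -- get_check_constraints() requires SQLAlchemy >= 1.1
# and dialect support for CHECK reflection.
from sqlalchemy import inspect, text


def checks_mentioning(column, check_constraints):
    """Return names of reflected CHECK constraint dicts referencing `column`."""
    return [ck["name"] for ck in check_constraints
            if column in (ck.get("sqltext") or "")]


def drop_column_with_checks(engine, table, column):
    inspector = inspect(engine)
    names = checks_mentioning(column, inspector.get_check_constraints(table))
    with engine.begin() as conn:
        for name in names:
            conn.execute(text(
                "ALTER TABLE %s DROP CONSTRAINT %s" % (table, name)))
        conn.execute(text(
            "ALTER TABLE %s DROP COLUMN %s" % (table, column)))
```

The crude substring match is the weak point; a real helper would want to parse
the reflected constraint SQL rather than grep it.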

On Mon, Jul 24, 2017 at 11:16 AM, Michael Bayer  wrote:
> On Mon, Jul 24, 2017 at 10:37 AM, Doug Hellmann  wrote:
>> Excerpts from Michael Bayer's message of 2017-07-23 16:39:20 -0400:
>>> Hey list -
>>>
>>> It appears that MariaDB as of version 10.2 has made an enhancement
>>> that overall is great and fairly historic in the MySQL community,
>>> they've made CHECK constraints finally work.   For all of MySQL's
>>> existence, you could emit a CREATE TABLE statement that included CHECK
>>> constraint, but the CHECK phrase would be silently ignored; there are
>>> no actual CHECK constraints in MySQL.
>>>
>>> Mariadb 10.2 has now made CHECK do something!  However!  the bad news!
>>>  They have decided that the CHECK constraint against a single column
>>> should not be implicitly dropped if you drop the column [1].   In case
>>> you were under the impression your SQLAlchemy / oslo.db project
>>> doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
>>> type, or the "ENUM" type without using MySQL's native ENUM feature
>>> (less likely), there's a simple CHECK constraint in there.
>>>
>>> So far the Zun project has reported the first bug on Alembic [2] that
>>> they can't emit a DROP COLUMN for a boolean column.In [1] I've
>>> made my complete argument for why this decision on the MariaDB side is
>>> misguided.   However, be on the lookout for boolean columns that can't
>>> be DROPPED on some environments using newer MariaDB.  Workarounds for
>>> now include:
>>>
>>> 1. when using Boolean(), set create_constraint=False
>>>
>>> 2. when using Boolean(), make sure it has a "name" to give the
>>> constraint, so that later you can DROP CONSTRAINT easily
>>>
>>> 3. if not doing #1 and #2, in order to drop the column you need to use
>>> the inspector (e.g. from sqlalchemy import inspect; inspector =
>>> inspect(engine)) and locate all the CHECK constraints involving the
>>> target column, and then drop them by name.
>>
>> Item 3 sounds like the description of a helper function we could add to
>> oslo.db for use in migration scripts.
>
> OK let me give a little bit more context, that if MariaDB holds steady
> here, I will have to implement #3 within Alembic itself (though yes,
> for SQLAlchemy-migrate, still needed :) ). MS SQL Server has the
> same limitation for CHECK constraints and Alembic provides for a
> SQL-only procedure that can run as a static SQL element on that
> backend; hopefully the same is possible for MySQL.
>
>
>
>>
>> Doug
>>
>>>
>>> [1] https://jira.mariadb.org/browse/MDEV-4
>>>
>>> [2] 
>>> https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 10:37 AM, Doug Hellmann  wrote:
> Excerpts from Michael Bayer's message of 2017-07-23 16:39:20 -0400:
>> Hey list -
>>
>> It appears that MariaDB as of version 10.2 has made an enhancement
>> that overall is great and fairly historic in the MySQL community,
>> they've made CHECK constraints finally work.   For all of MySQL's
>> existence, you could emit a CREATE TABLE statement that included CHECK
>> constraint, but the CHECK phrase would be silently ignored; there are
>> no actual CHECK constraints in MySQL.
>>
>> Mariadb 10.2 has now made CHECK do something!  However!  the bad news!
>>  They have decided that the CHECK constraint against a single column
>> should not be implicitly dropped if you drop the column [1].   In case
>> you were under the impression your SQLAlchemy / oslo.db project
>> doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
>> type, or the "ENUM" type without using MySQL's native ENUM feature
>> (less likely), there's a simple CHECK constraint in there.
>>
>> So far the Zun project has reported the first bug on Alembic [2] that
>> they can't emit a DROP COLUMN for a boolean column.In [1] I've
>> made my complete argument for why this decision on the MariaDB side is
>> misguided.   However, be on the lookout for boolean columns that can't
>> be DROPPED on some environments using newer MariaDB.  Workarounds for
>> now include:
>>
>> 1. when using Boolean(), set create_constraint=False
>>
>> 2. when using Boolean(), make sure it has a "name" to give the
>> constraint, so that later you can DROP CONSTRAINT easily
>>
>> 3. if not doing #1 and #2, in order to drop the column you need to use
>> the inspector (e.g. from sqlalchemy import inspect; inspector =
>> inspect(engine)) and locate all the CHECK constraints involving the
>> target column, and then drop them by name.
>
> Item 3 sounds like the description of a helper function we could add to
> oslo.db for use in migration scripts.

OK let me give a little bit more context, that if MariaDB holds steady
here, I will have to implement #3 within Alembic itself (though yes,
for SQLAlchemy-migrate, still needed :) ). MS SQL Server has the
same limitation for CHECK constraints and Alembic provides for a
SQL-only procedure that can run as a static SQL element on that
backend; hopefully the same is possible for MySQL.
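
To make workarounds #1 and #2 above concrete, here is a minimal model-side
sketch; the table and column names are invented for illustration and not from
any real project:

```python
# Workarounds #1 and #2 for the MariaDB 10.2 CHECK-constraint issue,
# shown on an illustrative table definition.
from sqlalchemy import Boolean, Column, Integer, MetaData, Table

metadata = MetaData()

example = Table(
    "example", metadata,
    Column("id", Integer, primary_key=True),
    # Workaround #1: skip the CHECK constraint entirely, so there is
    # nothing to drop later.
    Column("deleted", Boolean(create_constraint=False)),
    # Workaround #2: name the CHECK constraint, so a later migration can
    # issue DROP CONSTRAINT ck_example_enabled before dropping the column.
    Column("enabled", Boolean(name="ck_example_enabled")),
)
```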



>
> Doug
>
>>
>> [1] https://jira.mariadb.org/browse/MDEV-4
>>
>> [2] 
>> https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 10:01 AM, Jay Pipes  wrote:

> I would much prefer to *add* a brand new schema migration that handles
> conversion of the entire InnoDB schema at a certain point to an
> NDB-compatible one *after* that point. That way, we isolate the NDB changes
> to one specific schema migration -- and can point users to that one specific
> migration in case bugs arise. This is the reason that every release we add a
> number of "placeholder" schema migration numbered files to handle situations
> such as these.
>
> I understand that Oracle wants to support older versions of OpenStack in
> their distribution and that's totally cool with me. But, the proper way IMHO
> to do this kind of thing is to take one of the placeholder migrations and
> use that as the NDB-conversion migration. I would posit that since Oracle
> will need to keep some not-insignificant amount of Python code in their
> distribution fork of Nova in order to bring in the oslo.db and Nova NDB
> support, that it will actually be *easier* for them to maintain a *separate*
> placeholder schema migration for all NDB conversion work instead of changing
> an existing schema migration with a new patch.

OK, if it is feasible for the MySQL engine to build out the whole
schema as InnoDB and then do a migrate that changes the storage engine
of all tables to NDB and then also changes all the datatypes, that can
work.   If you want to go that way, then fine.

However, I may be missing something but I'm not seeing the practical
difference.   This new "ndb" migration still goes into the source
tree, still gets invoked for all users, and if the "if ndb_enabled()"
flag is somehow broken, it breaks just as well if it's in a brand new
migration vs. if it's in an old migration.

Suppose "if ndb_enabled(engine)" is somehow broken.  Either it crashes
the migrations, or it runs inappropriately.

If the conditional is in a brand new migration file that's pushed out
in Queens, *everybody* runs it when they upgrade, as well as when they
do fresh installation, and they get the breakage.

if the conditional is in havana 216, *everybody* gets it when they do
a fresh installation, and they get the breakage.   Upgraders do not.

How is "new migration" better than "make old migration compatible" ?

Again, fine by me if the other approach works, I'm just trying to see
where I'm being dense here.

Keep in mind that existing migrations *do* break and have to be fixed
- because while the migration files don't change, the databases they
talk to do.  The other thread I introduced about Mariadb 10.2 now
refusing to DROP columns that have a CHECK constraint is an example,
and will likely mean lots of old migration files across openstack
projects will need adjustments.








>
> All the best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][neutron] doc-migration status

2017-07-24 Thread Doug Hellmann
Excerpts from Kevin Benton's message of 2017-07-23 14:19:51 -0700:
> Yeah, the networking guide does include configuration for some of the
> sub-projects (e.g. BGP is at [1]). For the remaining ones there is work
> that needs to be done because their docs live in wiki pages.
> 
> 1.
> https://docs.openstack.org/ocata/networking-guide/config-bgp-dynamic-routing.html

OK, that's good to know. It would be good to be consistent with the
approach to the stadium projects, so we can either eliminate the list of
projects from landing pages that show things like "all of the admin
guides" or we can add the projects so users can find the docs. If
they're all covered in the networking guide, we could include that
information on the admin landing page, for example.

In the mean time, if someone from the neutron project will review
the list of "Missing URLs" on https://doughellmann.com/doc-migration/
and let me know which ones represent content included in other
documents, I can update the burndown chart generator to reflect
that.

Doug

> 
> 
> On Sun, Jul 23, 2017 at 1:32 PM, Doug Hellmann 
> wrote:
> 
> > Excerpts from Kevin Benton's message of 2017-07-23 01:31:25 -0700:
> > > Yeah, I was just thinking it makes it more explicit that we haven't just
> > > skipped doing an admin guide for a particular project.
> >
> > Sure, you can do that. I don't think we want to link to all of those
> > pages from the list of admin guides, though.
> >
> > I've updated the burndown chart generator to ignore the missing
> > admin guide URLs for networking subprojects.
> >
> > I don't see configuration or installation guides for quite a few
> > of those, either. Are those also handled within the neutron main
> > tree docs?
> >
> > Doug
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-24 Thread Doug Hellmann
Excerpts from Michael Bayer's message of 2017-07-23 16:39:20 -0400:
> Hey list -
> 
> It appears that MariaDB as of version 10.2 has made an enhancement
> that overall is great and fairly historic in the MySQL community,
> they've made CHECK constraints finally work.   For all of MySQL's
> existence, you could emit a CREATE TABLE statement that included CHECK
> constraint, but the CHECK phrase would be silently ignored; there are
> no actual CHECK constraints in MySQL.
> 
> Mariadb 10.2 has now made CHECK do something!  However!  the bad news!
>  They have decided that the CHECK constraint against a single column
> should not be implicitly dropped if you drop the column [1].   In case
> you were under the impression your SQLAlchemy / oslo.db project
> doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
> type, or the "ENUM" type without using MySQL's native ENUM feature
> (less likely), there's a simple CHECK constraint in there.
> 
> So far the Zun project has reported the first bug on Alembic [2] that
> they can't emit a DROP COLUMN for a boolean column.In [1] I've
> made my complete argument for why this decision on the MariaDB side is
> misguided.   However, be on the lookout for boolean columns that can't
> be DROPPED on some environments using newer MariaDB.  Workarounds for
> now include:
> 
> 1. when using Boolean(), set create_constraint=False
> 
> 2. when using Boolean(), make sure it has a "name" to give the
> constraint, so that later you can DROP CONSTRAINT easily
> 
> 3. if not doing #1 and #2, in order to drop the column you need to use
> the inspector (e.g. from sqlalchemy import inspect; inspector =
> inspect(engine)) and locate all the CHECK constraints involving the
> target column, and then drop them by name.

Item 3 sounds like the description of a helper function we could add to
oslo.db for use in migration scripts.

Doug

> 
> [1] https://jira.mariadb.org/browse/MDEV-4
> 
> [2] 
> https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Dan Smith
> So, I see your point here, but my concern here is that if we *modify* an
> existing schema migration that has already been tested to properly apply
> a schema change for MySQL/InnoDB and PostgreSQL with code that is
> specific to NDB, we introduce the potential for bugs where users report
> that the same migration works sometimes but fails other times.

This ^.

The same goes for really any sort of conditional in a migration where
you could end up with different schema. I know that is Mike's point (to
not have that happen) but I think the difficulty is proving and
guaranteeing (now and going forward) that they're identical. Modifying a
migration in the past is like a late-breaking conditional.

> I would much prefer to *add* a brand new schema migration that handles
> conversion of the entire InnoDB schema at a certain point to an
> NDB-compatible one *after* that point. That way, we isolate the NDB
> changes to one specific schema migration -- and can point users to that
> one specific migration in case bugs arise. This is the reason that every
> release we add a number of "placeholder" schema migration numbered files
> to handle situations such as these.

Yes.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Jay Pipes

+Dan Smith

Good morning Mike :) Comments inline...

On 07/23/2017 08:05 PM, Michael Bayer wrote:

On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes  wrote:

Glad you brought this up, Mike. I was going to start a thread about this.
Comments inline.

On 07/23/2017 05:02 PM, Michael Bayer wrote:
Well, besides that point (which I agree with), that is attempting to change
an existing database schema migration, which is a no-no in my book ;)


OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).


I'm not being a purist. I'm being a realist :) See below...


I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*.  That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target databse, that should never change.   No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend which are necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.


No disagreement here.


So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accomodate for when the
"ndb" flag is present - this because these migrations have to run on
brand new ndb installations in order to create the database.   If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and also it is fine that
the migrations are used to build up the DB).


So, I see your point here, but my concern here is that if we *modify* an 
existing schema migration that has already been tested to properly apply 
a schema change for MySQL/InnoDB and PostgreSQL with code that is 
specific to NDB, we introduce the potential for bugs where users report 
that the same migration works sometimes but fails other times.


I would much prefer to *add* a brand new schema migration that handles 
conversion of the entire InnoDB schema at a certain point to an 
NDB-compatible one *after* that point. That way, we isolate the NDB 
changes to one specific schema migration -- and can point users to that 
one specific migration in case bugs arise. This is the reason that every 
release we add a number of "placeholder" schema migration numbered files 
to handle situations such as these.


I understand that Oracle wants to support older versions of OpenStack in 
their distribution and that's totally cool with me. But, the proper way 
IMHO to do this kind of thing is to take one of the placeholder 
migrations and use that as the NDB-conversion migration. I would posit 
that since Oracle will need to keep some not-insignificant amount of 
Python code in their distribution fork of Nova in order to bring in the 
oslo.db and Nova NDB support, that it will actually be *easier* for them 
to maintain a *separate* placeholder schema migration for all NDB 
conversion work instead of changing an existing schema migration with a 
new patch.
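
As a rough illustration of that approach (not actual Nova code: `ndb_enabled`
is a stand-in for whatever detection hook oslo.db ends up providing, and the
table list is invented):

```python
# Sketch of a placeholder migration repurposed as the single NDB-conversion
# point.  Everything here is illustrative: ndb_enabled() stands in for a
# real oslo.db helper, and TABLES is not a real schema.

TABLES = ("instances", "instance_extra")


def ndb_enabled(engine):
    # Assumed detection hook, e.g. driven by a config flag.
    return getattr(engine, "use_ndb", False)


def upgrade(migrate_engine):
    if not ndb_enabled(migrate_engine):
        return  # a no-op on InnoDB/PostgreSQL deployments
    for table in TABLES:
        migrate_engine.execute(
            "ALTER TABLE %s ENGINE=NDBCLUSTER" % table)
```

The point of the shape above is isolation: InnoDB and PostgreSQL deployments
hit a pure no-op, and any NDB bug report points at this one migration.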


All the best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]

2017-07-24 Thread meher.hihi
Hello to the community of Openstack-Tricircle,

I hope you are all well! I previously posted about the problem I encountered 
while trying "Single pod installation with DevStack"; I managed to deploy the 
solution by doing the install on Ubuntu rather than on RHEL.

Now I am trying "Multi-pod Installation with DevStack". On the first node, the 
script stops with an error of this form: "[ERROR] /opt/stack/devstack/inc/python:256 
Can not find package tricircleclient in requirements". I wanted to 
know if you have any idea how to fix this error.

I thank you in advance!

Meher


Meher Hihi
Intern
ORANGE/IMT/OLN/WTC/CMA/MAX
Fixe : +33 2 96 07 03 71
Mobile : +33 7 58 38 68 87
meher.h...@orange.com



_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] help required regarding devstack

2017-07-24 Thread Ziad Nayyer
Dear,

I am a PhD candidate at COMSATS, Lahore, Pakistan, working on devstack.
I just wanted to know whether it supports VM migration between two
devstacks installed on two different physical machines, as currently I am
unable to find any lead. Also, please let me know how to restart a
particular service on devstack version Pike on CentOS 7.

The screen file (stack-screenrc) is not being generated in the devstack
folder, and systemctl only restarts keystone, not any other service like
nova-compute:

sudo systemctl restart devstack@keystone (works)
sudo systemctl restart devstack@nova
sudo systemctl restart devstack@nova-compute

and any of the others do not work.

I'll be very thankful.


-- 
Regards,

Muhammad Ziad Nayyer Dar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] breaking changes, new container image parameter formats

2017-07-24 Thread Jiří Stránský

On 19.7.2017 14:41, Dan Prince wrote:

I wanted to give a quick heads up on some breaking changes that started
landing last week with regards to how container images are specified
with Heat parameters in TripleO. There are a few patches associated
with converting over to the new changes but the primary patches are
listed below here [1] and here [2].

Here are a few examples where I'm using a local (insecure) docker
registry on 172.19.0.2.

The old parameters were:

   
   DockerNamespaceIsRegistry: true
   DockerNamespace: 172.19.0.2:8787/tripleoupstream
   DockerKeystoneImage: centos-binary-keystone:latest
   ...

The new parameters simplify things quite a bit so that each
Docker*Image parameter contains the *entire* URL required to pull the
docker image. It ends up looking something like this:

   ...
   DockerInsecureRegistryAddress: 172.19.0.2:8787/tripleoupstream
   DockerKeystoneImage: 172.19.0.2:8787/tripleoupstream/centos-binary-
keystone:latest
   ...

The benefit of the new format is that it makes it possible to pull
images from multiple registries without first staging them to a local
docker registry. Also, we've removed the 'tripleoupstream' default
container names and now require them to be specified. Removing the
default should make it much more explicit that the end user has
specified container image names correctly, rather than silently
falling back to 'tripleoupstream' because one of the container image
parameters didn't get specified.


Additional info based on #tripleo discussion: To keep using the values 
that were the defaults, you need to add `-e 
$THT_PATH/environments/docker-centos-tripleoupstream.yaml` [3] to the 
`openstack overcloud deploy` command.



Finally, the simplification of the
DockerInsecureRegistryAddress parameter into a single setting makes
things clearer to the end user as well.

A new python-tripleoclient command makes it possible to generate a
custom heat environment with defaults for your environment and
registry. For the examples above I can run 'overcloud container image
prepare' to generate a custom heat environment like this:

openstack overcloud container image prepare \
    --namespace=172.19.0.2:8787/tripleoupstream \
    --env-file=$HOME/containers.yaml

We choose not to implement backwards compatibility with the old image
formats as almost all of the Heat parameters here are net new in Pike
and as such have not yet been released yet. The changes here should
make it much easier to manage containers and work with other community
docker registries like RDO, etc.

[1] http://git.openstack.org/cgit/openstack/tripleo-heat-templates/comm
it/?id=e76d84f784d27a7a2d9e5f3a8b019f8254cb4d6c
[2] https://review.openstack.org/#/c/479398/17
[3] 
https://github.com/openstack/tripleo-heat-templates/blob/5cbcc8377c49e395dc1d02a976d9b4a94253f5ca/environments/docker-centos-tripleoupstream.yaml


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-24 Thread James Slagle
On Mon, Jul 24, 2017 at 3:12 AM, Marios Andreou  wrote:
>
>
> On Fri, Jul 21, 2017 at 1:21 AM, James Slagle 
> wrote:
>>
>> Following up on the previous thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
>>
>> I wanted to share some work I did around the prototype I mentioned
>> there. I spent a couple days exploring this idea. I came up with a
>> Python script that when run against an in progress Heat stack, will
>> pull all the server and deployment metadata out of Heat and generate
>> ansible playbooks/tasks from the deployments.
>>
>> Here's the code:
>> https://github.com/slagle/pump
>>
>> And an example of what gets generated:
>> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
>>
>> If you're interested in any more detail, let me know.
>>
>> It signals the stack to completion with a dummy "ok" signal so that
>> the stack will complete. You can then use ansible-playbook to apply
>> the actual deployments (in the expected order, respecting the steps
>> across all roles, and in parallel across all the roles).
>>
>> Effectively, this treats Heat as nothing but a yaml cruncher. When
>> using it with deployed-server, Heat doesn't actually change anything
>> on an overcloud node, you're only using it to generate ansible.
>
>
>
> Hi James,
>
> FYI this actually describes the current plan for Pike minor update [1] - the
> idea is to use the "openstack overcloud config download" (matbu++) to write
> the playbooks for each node from the deployed stack outputs. The minor
> update playbook(s) itself will be generated from new 'update_tasks' added to
> each of the service manifests (akin to the current upgrade_tasks). The plan
> is to disable the actual service config deployment steps so that we just get
> the stack outputs for the playbook generation.
>
> The effort is led by shardy and he has posted reviews/comments on the
> etherpad @ [1] FYI (I know he is away this week so may not respond ++ I was
> struck by the similarity between what you described above and the consensus
> we seemed to reach towards the end of the week about the minor update plan,
> so I thought you and others may be interested to hear it).

Yes, I've been looking at that work as well. I'm not entirely sure
what the longer term goals are, although I like the approach we are
taking with updates. Looking at the patches that have been posted so
far, I'm not sure if they are meant to be Docker/container specific
only, or if they would work with the puppet services as well or any
SoftwareConfig group type.

I've pulled all the patches locally and was actually testing with a
puppet only stack for initial deployment (no stack update to
Containers), and the generated config/playbooks are not correct
(trying to do something with containers when they shouldn't).

I'm not sure if that is intended to work or if there is a bug. I can
check with shardy when he returns what the goals and further context
around that approach are.

I think one of the primary differences between that approach and what
I prototyped was that my goal was to completely eliminate the
os-collect-config -> Heat metadata Deployment "transport" for any
SoftwareConfig group type (puppet, script, hiera, ansible). IME, that
has been one of the most difficult aspects of TripleO for users and
operators to reason about, reproduce, troubleshoot, and understand.

An additional goal is to see if it would be possible to do that
entirely external to Heat and/or tripleo-heat-templates. Just
considering all the reviews that are currently in progress for the
"config download" approach, there is a lot of refactoring, output
changes, and yaql churning in tripleo-heat-templates.

Certainly the approaches are similar, and could even co-exist,
although they are tackling the problem from different angles.

>
> Your review @ /#/c/485303/ is slightly different in that it doesn't disable
> the deployment/postdeploy steps but signals completion to Heat. Haven't
> checked that review in detail, but my first concern is: can/do you catch it in
> time... I mean you start the heat stack update and have to immediately call
> the openstack overcloud signal, if I understood correctly?

Yes, you'd have to signal the deployments before they time out on the Heat side.

You could also configure the signal_transport to NO_SIGNAL, in which
case Heat would just create the stack to completion without waiting
(and thus possibly timing out) for any signals.
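For illustration, here is a rough sketch of the dummy "ok" signal body such a prototype might post to each SoftwareDeployment resource. The field names follow Heat's standard software-deployment signal format; the helper itself is hypothetical, not code from pump:

```python
# Hypothetical helper (not from the pump code): build the JSON body for a
# dummy "ok" signal to a Heat SoftwareDeployment so the stack completes
# without doing any real configuration work.
import json

def ok_signal(stdout="", stderr=""):
    # deploy_status_code 0 marks the deployment COMPLETE; non-zero -> FAILED
    return {
        "deploy_stdout": stdout,
        "deploy_stderr": stderr,
        "deploy_status_code": 0,
    }

body = json.dumps(ok_signal())
```

Posting a body like this to each deployment's signal URL (before the Heat-side timeout fires) is what lets the stack reach CREATE_COMPLETE while ansible-playbook does the real work.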





-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Migration from Neutron ML2OVS to OVN

2017-07-24 Thread Numan Siddique
On Wed, Jul 19, 2017 at 11:37 PM, Ben Nemec  wrote:

>
>
> On 07/18/2017 08:18 AM, Numan Siddique wrote:
>
>>
>>
>> On Thu, Jul 13, 2017 at 3:02 PM, Saravanan KR wrote:
>>
>> On Tue, Jul 11, 2017 at 11:40 PM, Ben Nemec wrote:
>> >
>> >
>> > On 07/11/2017 10:17 AM, Numan Siddique wrote:
>> >>
>> >> Hello Tripleo team,
>> >>
>> >> I have a few questions regarding migration from neutron ML2OVS to
>> OVN. Below
>> >> are some of the requirements
>> >>
>> >>   - We want to migrate an existing deployment from the Neutron default
>> ML2OVS
>> >> to OVN
>> >>   - We are targeting this for the TripleO Queens release.
>> >>   - The plan is to first upgrade the tripleo deployment from Pike
>> to
>> >> Queens with no changes to neutron. i.e with neutron ML2OVS. Once
>> the upgrade
>> >> is done, we want to migrate to OVN.
>> >>   - The migration process will stop all the neutron agents,
>> configure
>> >> neutron server to load OVN mechanism driver and start OVN services
>> (with no
>> >> or very limited datapath downtime).
>> >>   - The migration would be handled by an ansible script. We have a
>> PoC
>> >> ansible script which can be found here [1]
>> >>
>> >> And the questions are
>> >> -  (A broad question) - What is the right way to migrate and
>> switch the
>> >> neutron plugin ? Can the stack upgrade handle the migration as
>> well ?
>> This is going to be a broader problem as it is also required to migrate
>> ML2OvS to ODL for NFV deployments, pretty much at the same timeline.
>> If i understand correctly, this migration involves stopping services
>> of ML2OVS (like neutron-ovs-agent) and starting the corresponding new
>> ML2 (OVN or ODL), along with few parameter additions and removals.
>>
>> >> - The migration procedure should be part of tripleo ? or can it be
>> a
>> >> standalone ansible script ? (I presume it should be former).
>> Each service has upgrade steps which can be associated via ansible
>> steps. But this is not a service upgrade. It disables an existing
>> service and enables a new service. So I think it would need an
>> explicit disabled service [1] to stop the required service and enable
>> the new service.
>>
>> >> - If it should be part of the tripleo then what would be the
>> command to do
>> >> it ? A update stack command with appropriate environment files for
>> OVN ?
>> >> - In case the migration can be done  as a standalone script, how
>> to handle
>> >> later updates/upgrades since tripleo wouldn't be aware of the
>> migration ?
>> >
>> I would also discourage doing it standalone.
>>
>> Another area which needs to be looked at is whether it should be
>> associated with the containers upgrade? Maybe OVN and ODL can be
>> migrated as containers only, instead of baremetal by default (just a
>> thought, could have implications to be worked out/discussed).
>>
>> Regards,
>> Saravanan KR
>>
>> [1]
>> https://github.com/openstack/tripleo-heat-templates/tree/master/puppet/services/disabled
>>
>>  >
>>  > This last point seems like the crux of the discussion here. Sure,
>>  > you can do all kinds of things to your cloud using standalone bits,
>>  > but if any of them affect things tripleo manages (which this would)
>>  > then you're going to break on the next stack update.
>>  >
>>  > If there are things about the migration that a stack-update can't
>>  > handle, then the migration process would need to be twofold: 1) Run
>>  > the standalone bits to do the migration 2) Update the tripleo
>>  > configuration to match the migrated config so stack-updates work.
>>  >
>>  > This is obviously a complex and error-prone process, so I'd strongly
>>  > encourage doing it in a tripleo-native fashion instead if at all
>>  > possible.
>>  >
>>
>>
>>
>> Thanks Ben and Saravanan for your comments.
>>
>> I did some testing. I first deployed an overcloud with the command [1]
>> and then I ran the command [2] which enables the OVN services. After the
>> completion of [2], all the neutron agents were stopped and all the OVN
>> services were up.
>>
>> The question is: is this the right way to disable some services and
>> enable others? Or is "openstack overcloud update stack" the right
>> command?
>>
>
> Re-running the deploy command as you did is the right way to change
> configuration.  The update stack command is just for updating packages.
>
>
Thanks Ben for the confirmation.

Numan


>
>>
>> [1] - openstack overcloud deploy \
>>  --templates /usr/share/openstack-tripleo-heat-templates \
>>  --libvirt-type qemu --control-flavor oooq_control --compute

Re: [openstack-dev] [nova] placement/resource providers update 29

2017-07-24 Thread Chris Dent

On Sat, 22 Jul 2017, Matt Riedemann wrote:


On 7/21/2017 6:54 AM, Chris Dent wrote:

## Custom Resource Classes for Ironic

A spec for custom resource classes is being updated to reflect the
need to update the flavor and allocations of a previously allocated
ironic node that now has a custom resource class (such as
CUSTOM_SILVER_IRON):

https://review.openstack.org/#/c/481748/

The implementation of those changes has started at:

https://review.openstack.org/#/c/484949/

That gets the flavor adjustment. Do we also need to do allocation
cleanups or was that already done at some point in the past?


That's done:

https://review.openstack.org/#/c/484935/


It's good that that's done, but that's not quite what I meant. That
will override stuff from elsewhere in the flavor with what's in extra
specs to create a reasonable allocation record.

I meant the case where an existing ironic instance was updated on
the ironic side to be CUSTOM_IRON_GOLD (or whatever) and needs to
have its previous allocations of VCPU: 2, DISK_GB: 1024, MEMORY_MB:
1024 replaced with CUSTOM_IRON_GOLD: 1?

a) Is that even a thing?
b) Do we need to do it with some new code, or is it already
   happening by way of the periodic job?

I guess the code that Ed's working on at

https://review.openstack.org/#/c/484949

needs to zero out VCPU etc. in the extra specs so that the eventual
allocation record created in 484935 is correct?
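For reference, a sketch of what the replaced allocation might look like as a placement PUT /allocations/{consumer_uuid} body (the list form used by early placement microversions; the UUID, helper name, and custom class here are illustrative only, not from the patches under review):

```python
# Illustrative only: after the switch, the consumer holds a single
# allocation against its ironic node's resource provider, with the
# standard classes replaced by the custom one.
def replacement_allocation(rp_uuid, custom_class="CUSTOM_IRON_GOLD"):
    return {
        "allocations": [
            {
                "resource_provider": {"uuid": rp_uuid},
                # PUT replaces the whole allocation set, so VCPU/MEMORY_MB/
                # DISK_GB simply no longer appear here.
                "resources": {custom_class: 1},
            }
        ]
    }

body = replacement_allocation("8d745b4c-1111-2222-3333-444455556666")
```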

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][keystone] internal endpoints vs sanity

2017-07-24 Thread Monty Taylor

On 07/24/2017 04:23 PM, Attila Fazekas wrote:

Thanks for your answer.

The real question is: do we agree that the internalURL usage suggested
in [1] is a bad security practice and should not be recommended to
operators at all?

Also, we should try to get rid of the endpointTypes in keystone v4.

Do we have any good reason (not just keeping dev envs happy) to keep
endpoint types?


The first step is to stop using the word "endpoint_type" - it's
"interface". Also, "internalURL" is legacy keystone v2; it's "internal".


"admin" interface is a thing that people shouldn't use. It's a holdover 
from a day when keystone did weird things. Nobody else should use it ever.


However, we just ADDED the ability to more intelligently consume 
interface so that public/internal cases can be handled better based on 
use-cases from deployers, so I do not think they're going to go away any 
time in the foreseeable future.


For instance, we added the ability to pass a list of interface values to 
keystoneauth's endpoint_filter - and these are also now in the adapter 
options so that the default value for "interface" for nova talking to 
neutron can be ['internal', 'public'] - which says: "please default to 
using the internal interface if one exists, otherwise fall back to the 
public interface"
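That preference-list selection can be modelled in a few lines. This is a simplified stand-in for keystoneauth's endpoint_filter handling, not its actual code, and the catalog entry below is hypothetical:

```python
# Simplified model of interface-preference endpoint selection: walk the
# preference list and return the first endpoint whose interface matches.
def pick_endpoint(endpoints, interfaces=("internal", "public")):
    for iface in interfaces:
        for ep in endpoints:
            if ep["interface"] == iface:
                return ep["url"]
    raise LookupError("no endpoint for any of %s" % (interfaces,))

# Hypothetical catalog entry for neutron with both interfaces registered.
neutron = [
    {"interface": "public", "url": "https://cloud.example.com:9696"},
    {"interface": "internal", "url": "http://192.0.2.10:9696"},
]
```

With the default ("internal", "public") preference the RFC1918 internal endpoint wins when present, while a cloud that registers only a public endpoint still resolves cleanly.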


While it may not be a setup that everyone wants, for some deployers 
having a public and internal is important. I know several clouds have 
deployed completely separate API tiers and registered them as "internal" 
so that they could be assured that service-to-service communications 
worked well even if end-users were hammering the public endpoints. Those 
deployers do not seem to mind the RFC1918 showing up in the catalog, and 
if they're doing point-to-point firewalling (as they should be) the 
private addresses should not be considered 'secret' so there's no real 
problem exposing them in the catalog.



On Fri, Jul 21, 2017 at 1:37 PM, Giulio Fidente wrote:


Only a comment about the status in TripleO

On 07/21/2017 12:40 PM, Attila Fazekas wrote:

[...]

> We should seriously consider using names instead of ip address also
> on the devstack gates to avoid people thinking the catalog entries
> meant to be used with ip address and keystone is a replacement for DNS.

this is configurable, you can have names or ips in the keystone
endpoints ... actually you can choose to use names or ips independently
for each service and even for the different endpoints
(Internal/Admin/Public) of the same service

if an operator, like you suggested, configures the DNS to resolve
different IPs for the same name based on where the request comes from,
then he can use the same 'hostname' for all Public, Admin and Internal
endpoints which I *think* is what you're suggesting

also using names is the default when ssl is enabled

check environments/ssl/tls-endpoints-public-dns.yaml and note how
EndpointMap can resolve to CLOUDNAME or IP_ADDRESS

adding Juan on CC as he did a great work around this and can help
further
--
Giulio Fidente
GPG KEY: 08D733BA




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][keystone] internal endpoints vs sanity

2017-07-24 Thread Dmitry Tantsur
These questions are for the operators, and IMO should be asked on
openstack-operators (perhaps with the overall tone toned down a bit).


On 07/24/2017 10:23 AM, Attila Fazekas wrote:

Thanks for your answer.

The real question is: do we agree that the internalURL usage suggested
in [1] is a bad security practice and should not be recommended to
operators at all?

Also, we should try to get rid of the endpointTypes in keystone v4.


Let's not seriously talk about keystone v4 at this point, we haven't gotten rid 
of v2 so far.




Do we have any good reason (not just keeping dev envs happy) to keep
endpoint types?


I suspect any external SSL termination proxy, and anything else that will
make the URLs exposed to end users look different from the ones exposed to
services.


Speaking of DNS, I also suspect there may be a micro-optimization in not making 
the services use it when talking to each other, while still providing names to 
end users.






On Fri, Jul 21, 2017 at 1:37 PM, Giulio Fidente wrote:


Only a comment about the status in TripleO

On 07/21/2017 12:40 PM, Attila Fazekas wrote:

[...]

> We should seriously consider using names instead of ip address also
> on the devstack gates to avoid people thinking the catalog entries
> meant to be used with ip address and keystone is a replacement for DNS.

this is configurable, you can have names or ips in the keystone
endpoints ... actually you can choose to use names or ips independently
for each service and even for the different endpoints
(Internal/Admin/Public) of the same service

if an operator, like you suggested, configures the DNS to resolve
different IPs for the same name based on where the request comes from,
then he can use the same 'hostname' for all Public, Admin and Internal
endpoints which I *think* is what you're suggesting

also using names is the default when ssl is enabled

check environments/ssl/tls-endpoints-public-dns.yaml and note how
EndpointMap can resolve to CLOUDNAME or IP_ADDRESS

adding Juan on CC as he did a great work around this and can help further
--
Giulio Fidente
GPG KEY: 08D733BA




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Saravanan KR core

2017-07-24 Thread Dougal Matthews
+1!

On 21 July 2017 at 16:01, Emilien Macchi  wrote:

> Saravanan KR has shown a high level of expertise in some areas of
> TripleO, and also increased his involvement over the last months:
> - Major contributor in DPDK integration
> - Derived parameter works
> - and a lot of other things like improving UX and enabling new
> features to improve performances and networking configurations.
>
> I would like to propose Saravanan as part of TripleO core, and we expect
> his particular focus to be on t-h-t, os-net-config and tripleoclient for
> now, but we hope to extend it later.
>
> As usual, we'll vote :-)
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][keystone] internal endpoints vs sanity

2017-07-24 Thread Attila Fazekas
Thanks for your answer.

The real question is: do we agree that the internalURL usage suggested
in [1] is a bad security practice and should not be recommended to
operators at all?

Also, we should try to get rid of the endpointTypes in keystone v4.

Do we have any good reason (not just keeping dev envs happy) to keep
endpoint types?



On Fri, Jul 21, 2017 at 1:37 PM, Giulio Fidente  wrote:

> Only a comment about the status in TripleO
>
> On 07/21/2017 12:40 PM, Attila Fazekas wrote:
>
> [...]
>
> > We should seriously consider using names instead of ip address also
> > on the devstack gates to avoid people thinking the catalog entries
> > meant to be used with ip address and keystone is a replacement for DNS.
>
> this is configurable, you can have names or ips in the keystone
> endpoints ... actually you can choose to use names or ips independently
> for each service and even for the different endpoints
> (Internal/Admin/Public) of the same service
>
> if an operator, like you suggested, configures the DNS to resolve
> different IPs for the same name based on where the request comes from,
> then he can use the same 'hostname' for all Public, Admin and Internal
> endpoints which I *think* is what you're suggesting
>
> also using names is the default when ssl is enabled
>
> check environments/ssl/tls-endpoints-public-dns.yaml and note how
> EndpointMap can resolve to CLOUDNAME or IP_ADDRESS
>
> adding Juan on CC as he did a great work around this and can help further
> --
> Giulio Fidente
> GPG KEY: 08D733BA
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] [oslo] [osprofiler] [performance] OSprofiler in Swift

2017-07-24 Thread vin...@vn.fujitsu.com
Hello folks,

I'm sending this email to ask about the work of integrating OSprofiler
into Swift.
Currently I'm working on this, and the patches in Swift are waiting for
review here: https://review.openstack.org/#/c/468316/

We know that Swift has the xprofile module [3] for profiling work. However,
OSprofiler differs from xprofile in many aspects [6]:
xprofile is a profiling tool, while OSprofiler is a distributed tracing tool
for OpenStack services. We need OSprofiler in Swift for tracing across other
OpenStack services.

FYI, OSprofiler provides functionality to generate a trace per request that
goes through all involved services.
This trace can visualize the flow of a request [4] [5].
A trace from OSprofiler can help us with these things:
- Finding performance bottlenecks in a service
- Troubleshooting issues in a service
- Understanding the flow of a request (from the CLI client or another client)
- Traces can be stored in persistent storage
- Trace flow can be visualized in many OpenTracing-compatible tracers [5]
(will be done soon)
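As a rough illustration of what such a trace records, here is a toy stand-in (not the OSprofiler API; the real library adds HMAC signing and pluggable notifier backends): each nested trace point carries its parent's id, which is what lets a collector rebuild the request tree across services.

```python
# Toy model of nested trace points: one base_id per request, and each
# nested span records the id of its parent span.
import uuid

class MiniTracer:
    def __init__(self):
        self.base_id = str(uuid.uuid4())   # one id for the whole request
        self.events = []
        self._stack = [self.base_id]

    def start(self, name):
        span_id = str(uuid.uuid4())
        # record (event, name, parent_id, span_id)
        self.events.append(("start", name, self._stack[-1], span_id))
        self._stack.append(span_id)

    def stop(self, name):
        self._stack.pop()
        self.events.append(("stop", name))

# A request entering the proxy server and fanning out to an object server.
tracer = MiniTracer()
tracer.start("swift.proxy-server")
tracer.start("swift.object-server")
tracer.stop("swift.object-server")
tracer.stop("swift.proxy-server")
```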

Hope that it will receive reviews from you all.

Some related references:
[1] OSprofiler documentation: https://docs.openstack.org/osprofiler/latest/
[2] Swift documentation: https://docs.openstack.org/swift/latest/ 
[3] Swift xprofile:
https://docs.openstack.org/swift/latest/middleware.html#module-swift.common.middleware.xprofile
[4] Demo with current OSprofiler patch set in Swift: 
https://tovin07.github.io/swift/swift-object-create.html
[5] A demo with OpenTracing compatible (using Uber Jaeger): 
https://tovin07.github.io/opentracing/jaeger-openstack-image-list.png
[6] Why not cProfile and others? 
https://docs.openstack.org/osprofiler/latest/user/background.html#why-not-cprofile-and-etc
[7] xprofile can use cProfile to profile internal python call: 
https://github.com/openstack/swift/search?utf8=%E2%9C%93&q=cprofile&type=
[8] Some concerns from notmyname: 
http://eavesdrop.openstack.org/irclogs/%23openstack-swift/%23openstack-swift.2017-05-03.log.html#t2017-05-03T15:33:25
[9] Discussions from IRC meeting log: 
http://eavesdrop.openstack.org/meetings/swift/2017/swift.2017-06-14-07.00.log.html#l-152

Best regards,

Vinh Nguyen Trong
PODC - Fujitsu Vietnam Ltd.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Saravanan KR core

2017-07-24 Thread Sanjay Upadhyay
On Mon, Jul 24, 2017 at 12:10 PM, Marios Andreou 
wrote:

>
>
> On Fri, Jul 21, 2017 at 6:01 PM, Emilien Macchi 
> wrote:
>
>> Saravanan KR has shown a high level of expertise in some areas of
>> TripleO, and also increased his involvement over the last months:
>> - Major contributor in DPDK integration
>> - Derived parameter works
>> - and a lot of other things like improving UX and enabling new
>> features to improve performances and networking configurations.
>>
>> I would like to propose Saravanan as part of TripleO core, and we expect
>> his particular focus to be on t-h-t, os-net-config and tripleoclient for
>> now, but we hope to extend it later.
>>
>> As usual, we'll vote :-)
>> Thanks,
>>
>
>
> +1
>
>
>

+1


> --
>> Emilien Macchi
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Bogdan Dobrelya core on TripleO / Containers

2017-07-24 Thread Marios Andreou
On Fri, Jul 21, 2017 at 5:55 PM, Emilien Macchi  wrote:

> Hi,
>
> Bogdan (bogdando on IRC) has been very active in Containerization of
> TripleO and his quality of review has increased over time.
> I would like to give him core permissions on container work in TripleO.
> Any feedback is welcome as usual, we'll vote as a team.
>
>
+1


> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] An experiment with Ansible

2017-07-24 Thread Marios Andreou
On Fri, Jul 21, 2017 at 1:21 AM, James Slagle 
wrote:

> Following up on the previous thread:
> http://lists.openstack.org/pipermail/openstack-dev/2017-July/119405.html
>
> I wanted to share some work I did around the prototype I mentioned
> there. I spent a couple days exploring this idea. I came up with a
> Python script that when run against an in progress Heat stack, will
> pull all the server and deployment metadata out of Heat and generate
> ansible playbooks/tasks from the deployments.
>
> Here's the code:
> https://github.com/slagle/pump
>
> And an example of what gets generated:
> https://gist.github.com/slagle/433ea1bdca7e026ce8ab2c46f4d716a8
>
> If you're interested in any more detail, let me know.
>
> It signals the stack to completion with a dummy "ok" signal so that
> the stack will complete. You can then use ansible-playbook to apply
> the actual deployments (in the expected order, respecting the steps
> across all roles, and in parallel across all the roles).
>
> Effectively, this treats Heat as nothing but a yaml cruncher. When
> using it with deployed-server, Heat doesn't actually change anything
> on an overcloud node, you're only using it to generate ansible.
>


Hi James,

FYI this actually describes the current plan for Pike minor update [1] -
the idea is to use the "openstack overcloud config download" (matbu++) to
write the playbooks for each node from the deployed stack outputs. The
minor update playbook(s) itself will be generated from new 'update_tasks'
added to each of the service manifests (akin to the current upgrade_tasks).
The plan is to disable the actual service config deployment steps so that
we just get the stack outputs for the playbook generation.

The effort is led by shardy and he has posted reviews/comments on the
etherpad @ [1] FYI (I know he is away this week so may not respond ++ I was
struck by the similarity between what you described above and the consensus
we seemed to reach towards the end of the week about the minor update plan,
so I thought you and others may be interested to hear it).

Your review @ /#/c/485303/ is slightly different in that it doesn't disable
the deployment/postdeploy steps but signals completion to Heat. Haven't
checked that review in detail, but my first concern is: can/do you catch it in
time... I mean you start the heat stack update and have to immediately call
the openstack overcloud signal, if I understood correctly?

thanks, marios

[1] https://etherpad.openstack.org/p/tripleo-pike-updates-upgrades



>
> Honestly, I think I will prefer the longer term approach of using
> stack outputs. Although, I am not sure of the end goal of that work
> and if it is the same as this prototype.
>
> And some of what I've done may be useful with that approach as well:
> https://review.openstack.org/#/c/485303/
>
> However, I found this prototype interesting and worth exploring for a
> couple of reasons:
>
> Regardless of the approach we take, I wanted to explore what an end
> result might look like. Personally, this illustrates what I kind of
> had in mind for an "end goal".
>
> I also wanted to see if this was at all feasible. I envisioned some
> hurdles, such as deployments depending on output values of previous
> deployments, but we actually only do that in 1 place in
> tripleo-heat-templates, and I was able to work around that. In the end
> I used it to deploy an all in one overcloud equivalent to our
> multinode CI job, so I believe it's feasible.
>
> It meets most of the requirements we're looking to get out of ansible.
> You can (re)apply just a single deployment, or a given deployment
> across all ResourceGroup members, or all deployments for a given
> server(s), it's easy to see what failed and for what servers, etc.
>
> Finally, it's something we could deliver without much (any?) change
> in tripleo-heat-templates. Although I'm not trying to say it'd be a
> small amount of work to even do that, as this is a very rough
> prototype.
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Monty Taylor

On 07/24/2017 08:05 AM, Michael Bayer wrote:

On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes  wrote:

Glad you brought this up, Mike. I was going to start a thread about this.
Comments inline.

On 07/23/2017 05:02 PM, Michael Bayer wrote:
Well, besides that point (which I agree with), that is attempting to change
an existing database schema migration, which is a no-no in my book ;)



OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).

I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*.  That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target databse, that should never change.   No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend which are necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.

So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accommodate when the
"ndb" flag is present - this because these migrations have to run on
brand new ndb installations in order to create the database.   If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and also it is fine that
the migrations are used to build up the DB).

There is also the option to override the compilation for the base
SQLAlchemy String type so that no change at all would be needed
in consuming projects in this area, but it seems like there is a need
to specify ndb-specific length arguments in some cases so keeping the
oslo_db-level API seems like it would be best.  (Note that the ndb
module in oslo_db *does* instrument the CreateTable construct globally
however, though it is very careful not to be involved unless the ndb
flag is present).
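To make the point concrete, here is a toy model of the idea (the length threshold and TEXT fallback below are illustrative only, not oslo.db's actual ndb rules): the Python code branches on the ndb flag, but without the flag the emitted DDL is byte-identical to what it was before.

```python
# Illustrative model: migration Python gains an ndb conditional, yet the
# SQL emitted on non-ndb backends does not change at all.
def string_ddl(length, ndb=False):
    if ndb and length > 255:
        return "TEXT"  # stand-in for NDB's row-size-driven type swap
    return "VARCHAR(%d)" % length
```

This is the "SQL steps do not change, Python that emits them does" distinction in miniature: existing installations re-running the migration see identical DDL, while a brand-new ndb installation gets the types it needs.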


I guess the situation is that if one is not using the ndb flag, the
python logic results in no SQL differences. And before these changes are 
made it's not possible to run with the ndb flag - so there should be no 
people for whom this is behavioral difference, right? (like, it's not 
like we're going to have a person using the ndb flag missing an ndb 
specific length somewhere because they ran the migrations before the 
python logic was fixed, right?)

I can add these names to oslo.db, and then we would just need to
spread them through all the open ndb reviews and also patch up
Cinder, which seems to be the only ndb implementation that's been
merged so far.



+1


Keep in mind this is really me trying to correct my own mistake, as I
helped design and approved of the original approach here where
projects would consume against the "ndb." namespace.  However, after
seeing in reviews how prevalent the use of this extremely
backend-specific name is, I think the name should appear much less
frequently throughout projects, and only around logic that is purely
to do with the ndb backend and no others.  At the datatype
level, the chance of future naming conflicts is very high and we
should fix this mistake (my mistake) before it gets committed
throughout many downstream projects.



I had a private conversation with Octave on Friday. I had mentioned that I
was upset I didn't know about the series of patches to oslo.db that added
that module. I would certainly have argued against that approach. Please
consider hitting me with a cluestick next time something of this nature pops
up. :)

Also, as I told Octave, I have no problem whatsoever with NDB Cluster. I
actually think it's a pretty brilliant piece of engineering -- and have for
over a decade since I worked at MySQL.

My complaint regarding the code patch proposed to Nova was around the
hard-coding of the ndb namespace into the model definitions.

Best,
-jay




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/o