Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-26 Thread Michael Bayer
On Jul 25, 2017 3:38 PM, "Octave J. Orgeron" <octave.orge...@oracle.com>
wrote:

Hi Michael,

> I understand that you want to abstract this completely away inside of
> oslo.db. However, the reality is that making column changes based purely on
> the size and type of that column, without understanding what that column is
> being used for, is extremely dangerous. You could end up clobbering a column
> that needs a specific length for a value,



Nowhere in my example is the current length truncated.   Also, if two
distinct lengths truly must be maintained, we add a field "minimum_length".


> prevent an index from working, etc.


That's what the indexable flag would achieve.

> It wouldn't make sense to just do global changes on a column based on the
> size.


This seems to be what your patches are doing, however.


> There are far more tables that fit in both InnoDB and NDB already than
> those that don't. As I've stated many times before, the columns that I make
> changes to are evaluated to understand:
>
> 1. What populates it?
> 2. Who consumes it?
> 3. What are the possible values and required lengths?
> 4. What is the impact of changing the size or type?
> 5. Evaluated against the other columns in the table, which one makes the
> most sense to adjust?
>
> I don't see a way of automating that and making it maintainable without a
> lot more overhead in code and people.


My proposal is intended to *reduce* the great verbosity in the current
patches I see and remove the burden of every project having to be aware of
"ndb" every time a column is added.


> If we really want to remove the complexity here, why don't we just change
> the sizes and types on these handful of table columns so that they fit
> within both InnoDB and NDB?



Because that requires new migrations, which are a great risk and an
inconvenience to projects.

> That way we don't need these functions and the tables are exactly the same?
> That would only leave us with the CreateTable, savepoint/rollback, etc.
> stuff to address, which is already taken care of in the ndb module in
> oslo.db? Then we just fix the foreign key stuff as I've been doing, since
> it has zero impact on InnoDB deployments and if anything ensures things are
> consistent. That would then leave us to really focus on fixing migrations
> to use oslo.db and pass the correct flags, which is a more lengthy process
> than the rest of this.

> I don't see the point in trying to make this stuff any more complicated.


The proposal is to make it simpler than it is right now.

Run through every column change you've proposed and show me which ones don't
fit into my proposed ruleset.   I will add additional declarative flags to
ensure those use cases are covered.





Octave


On 7/25/2017 12:20 PM, Michael Bayer wrote:

> On Mon, Jul 24, 2017 at 5:41 PM, Michael Bayer <mba...@redhat.com> wrote:
>
>> oslo_db.sqlalchemy.String(255, ndb_type=TINYTEXT) -> VARCHAR(255) for most
>>> dbs, TINYTEXT for ndb
>>> oslo_db.sqlalchemy.String(4096, ndb_type=TEXT) -> VARCHAR(4096) for most
>>> dbs, TEXT for ndb
>>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on most dbs,
>>> VARCHAR(64) on ndb
>>>
>>> This way, we can override the String with TINYTEXT or TEXT or change the
>>> size for ndb.
>>>
>>>> oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
>>>> TINYTEXT() on ndb
>>>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
>>>> most dbs, VARCHAR(64) on ndb
>>>> oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
>>>> oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
>>>> oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT()
>>>> on ndb
>>>> oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
>>>> dbs, VARCHAR(55) on ndb
>>>>
>>>> don't worry about implementation, can the above declaration ->
>>>> datatype mapping work ?
>>>>
>>>>
>>>> In my patch for Neutron, you'll see a lot of the AutoStringText() calls
>>> to
>>> replace exceptionally long String columns (4096, 8192, and larger).
>>>
>> MySQL supports large VARCHAR now, OK.   yeah this could be
>> String(8192, ndb_type=TEXT) as well.
>>
> OK, no, sorry each time I think of this I keep seeing the verbosity of
> imports etc. in the code, because if we had:
>
> String(80, ndb_type=TEXT)
>
> then we have to import both String and TEXT, and then what if there's
> ndb.TEXT, the code is still making an ndb-specific decision, etc.
>
> I still see that this can be mostly automated from a simple ruleset
> based on the size:
>
> length <= 64: VARCHAR(length) on all backends

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-25 Thread Michael Bayer
On Mon, Jul 24, 2017 at 5:41 PM, Michael Bayer <mba...@redhat.com> wrote:
>> oslo_db.sqlalchemy.String(255, ndb_type=TINYTEXT) -> VARCHAR(255) for most
>> dbs, TINYTEXT for ndb
>> oslo_db.sqlalchemy.String(4096, ndb_type=TEXT) -> VARCHAR(4096) for most
>> dbs, TEXT for ndb
>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on most dbs,
>> VARCHAR(64) on ndb
>>
>> This way, we can override the String with TINYTEXT or TEXT or change the
>> size for ndb.
>
>>>
>>> oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
>>> TINYTEXT() on ndb
>>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
>>> most dbs, VARCHAR(64) on ndb
>>> oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
>>> oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
>>> oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT()
>>> on ndb
>>> oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
>>> dbs, VARCHAR(55) on ndb
>>>
>>> don't worry about implementation, can the above declaration ->
>>> datatype mapping work ?
>>>
>>>
>> In my patch for Neutron, you'll see a lot of the AutoStringText() calls to
>> replace exceptionally long String columns (4096, 8192, and larger).
>
> MySQL supports large VARCHAR now, OK.   yeah this could be
> String(8192, ndb_type=TEXT) as well.

OK, no, sorry each time I think of this I keep seeing the verbosity of
imports etc. in the code, because if we had:

String(80, ndb_type=TEXT)

then we have to import both String and TEXT, and then what if there's
ndb.TEXT, the code is still making an ndb-specific decision, etc.

I still see that this can be mostly automated from a simple ruleset
based on the size:

length <= 64: VARCHAR(length) on all backends
64 < length <= 255: VARCHAR(length) for most backends, TINYTEXT for ndb
length > 255: VARCHAR(length) for most backends, TEXT for ndb

the one case that seems outside of this is:

String(255)  where they have an index or key on the VARCHAR, and in
fact they only need < 64 characters to be indexed.  In that case you
don't want to use TINYTEXT, right?   So one exception:

oslo_db.sqlalchemy.types.String(255, indexable=True)

e.g. a declarative hint to the oslo_db backend to not use a LOB type.

then we just need oslo_db.sqlalchemy.types.String, and virtually
nothing except the import has to change, and a few keywords.

What we're trying to do in oslo_db is as much as possible state the
intent of a structure or datatype declaratively, and leave as much of
the implementation up to oslo_db itself.
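For illustration, the size-based ruleset above reduces to a pure function. This is a sketch only, not oslo.db's actual API: the function name is made up, and the band between 256 and 4096 is folded into TEXT here since the thread does not pin it down.

```python
def declared_string_to_ndb_ddl(length, indexable=False):
    """Map a declared String(length) to the concrete type used on ndb.

    Illustrative sketch of the proposed ruleset; `indexable` is the
    declarative hint discussed above, not a real oslo.db parameter.
    """
    if indexable or length <= 64:
        # indexed/key columns and short strings stay plain VARCHAR
        return "VARCHAR(%d)" % length
    if length <= 255:
        # holds the same 255 characters, stored outside the fixed-size row
        return "TINYTEXT"
    # exceptionally long strings (4096, 8192, ...) become TEXT
    return "TEXT"
```

Under this sketch, declared_string_to_ndb_ddl(255) yields "TINYTEXT" while declared_string_to_ndb_ddl(255, indexable=True) keeps "VARCHAR(255)", matching the one exception called out above.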

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 5:10 PM, Octave J. Orgeron wrote:
> I don't think it makes sense to make these global. We don't need to change
> all occurrences of String(255) to TinyText for example. We make that
> determination through understanding the table structure and usage. But I do
> like the idea of changing the second option to ndb_size=, I think that makes
> things very clear. If you want to collapse the use cases.. what about?:
>
> oslo_db.sqlalchemy.String(255, ndb_type=TINYTEXT) -> VARCHAR(255) for most
> dbs, TINYTEXT for ndb
> oslo_db.sqlalchemy.String(4096, ndb_type=TEXT) -> VARCHAR(4096) for most
> dbs, TEXT for ndb
> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on most dbs,
> VARCHAR(64) on ndb
>
> This way, we can override the String with TINYTEXT or TEXT or change the
> size for ndb.

OK.   See, originally when I was pushing for an ndb "dialect", that
hook lets us say String(255).with_variant(TEXT, "ndb") which is what I
was going for originally.  However, since we went with a special flag
and not a dialect, using ndb_type / ndb_size is *probably* fine.
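For reference, the dialect hook mentioned here is SQLAlchemy's real with_variant() mechanism; in this sketch the existing "mysql" dialect name stands in for the "ndb" dialect that was not pursued:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import mysql

# VARCHAR(255) on the default dialect, TINYTEXT when compiled against
# MySQL -- the shape an ndb-specific variant would have taken had "ndb"
# been registered as its own dialect.
AutoString255 = sa.String(255).with_variant(mysql.TINYTEXT(), "mysql")
```

Compiling the type against a MySQL dialect renders TINYTEXT, while the default compilation still renders VARCHAR(255), so non-MySQL backends are untouched.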


>
>>
>> oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
>> TINYTEXT() on ndb
>> oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
>> most dbs, VARCHAR(64) on ndb
>> oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
>> oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
>> oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT()
>> on ndb
>> oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
>> dbs, VARCHAR(55) on ndb
>>
>> don't worry about implementation, can the above declaration ->
>> datatype mapping work ?
>>
>> Also where are we using AutoStringText(), it sounds like this is just
>> what SQLAlchemy calls the Text() datatype?   (e.g. an unlengthed
>> string type, comes out as CLOB etc).
>>
> In my patch for Neutron, you'll see a lot of the AutoStringText() calls to
> replace exceptionally long String columns (4096, 8192, and larger).

MySQL supports large VARCHAR now, OK.   yeah this could be
String(8192, ndb_type=TEXT) as well.


>
>
>
>
>>
>>
>>> In many cases, the use of these could be removed by simply changing the
>>> columns to more appropriate types and sizes. There is a tremendous amount
>>> of
>>> wasted space in many of the databases. I'm more than willing to help out
>>> with this if teams decide they would rather do that instead as the
>>> long-term
>>> solution. Until then, these functions enable the use of both with minimal
>>> impact.
>>>
>>> Another thing to keep in mind is that the only services that I've had to
>>> adjust column sizes for are:
>>>
>>> Cinder
>>> Neutron
>>> Nova
>>> Magnum
>>>
>>> The other services that I'm working on like Keystone, Barbican, Murano,
>>> Glance, etc. only need changes to:
>>>
>>> 1. Ensure that foreign keys are dropped and created in the correct order
>>> when changing things like indexes, constraints, etc. Many services do
>>> these
>>> proper steps already, there are just cases where this has been missed
>>> because InnoDB is very forgiving on this. But other databases are not.
>>> 2. Fixing the database migration and sync operations to use oslo.db, pass
>>> the right parameters, etc. Something that should have been done in the
>>> first
>>> place, but hasn't. So this is more of a housecleaning step to ensure that
>>> services are using oslo.db correctly.
>>>
>>> The only other oddball use case is dealing with disabling nested
>>> transactions,
>>> where Neutron is the only one that does this.
>>>
>>> On the flip side, here is a short list of services that I haven't had to
>>> make ANY changes for other than having oslo.db 4.24 or above:
>>>
>>> aodh
>>> gnocchi
>>> heat
>>> ironic
>>> manila
>>>
 3. it's not clear (I don't even know right now by looking at these
 reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
 For example in

 https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
 I see a list of String(255)'s changed to one type or the other without
 any clear notion why one would use one or the other.  Having names
 that define simply the declared nature of the type would be most
 appropriate.
>>>
>>>
>>> One has to look at what the column is being used for and decide what
>>> appropriate remediation steps are. This takes time and one must research
>>> what kind of data goes in the column, what puts it there, what consumes
>>> it,
>>> and what remediation would have the least amount of impact.
>>>
 I can add these names up to oslo.db and then we would just need to
 spread these out through all the open ndb reviews and then also patch
 up Cinder which seems to be the only ndb implementation that's been
 merged so far.

 Keep in mind this is really me trying to correct my own mistake, as I
 helped design and approved of the 

Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 3:37 PM, Octave J. Orgeron wrote:
> For these, here is a brief synopsis:
>
> AutoStringTinyText, will convert a column to the TinyText type. This is used
> for cases where a 255 varchar string needs to be converted to a text blob to
> make the row fit within the NDB limits. If you are using ndb, it'll convert
> it to TinyText, otherwise it leaves it alone. The reason that TinyText type
> was chosen is because it'll hold the same 255 varchars and saves on space.
>
> AutoStringText, does the same as the above, but converts the type to Text
> and is meant for use cases where you need more than 255 varchar worth of
> space. Good examples of these uses are where outputs of hypervisor and OVS
> commands are dumped into the database.
>
> AutoStringSize, you pass two parameters, one being the non-NDB size and the
> second being the NDB size. The point here is where you need to reduce the
> size of the column to fit within the NDB limits, but you want to preserve
> the String varchar type because it might be used in a key, index, etc. I
> only use these in cases where the impacts are very low.. for example where a
> column is used for keeping track of status (up, down, active, inactive,
> etc.) that don't require 255 varchars.

Can the "auto" that is supplied by AutoStringTinyText and
AutoStringSize be merged?


oslo_db.sqlalchemy.String(255) -> VARCHAR(255) on most dbs,
TINYTEXT() on ndb
oslo_db.sqlalchemy.String(255, ndb_size=64) -> VARCHAR(255) on
most dbs, VARCHAR(64) on ndb
oslo_db.sqlalchemy.String(50) -> VARCHAR(50) on all dbs
oslo_db.sqlalchemy.String(64) -> VARCHAR(64) on all dbs
oslo_db.sqlalchemy.String(80) -> VARCHAR(64) on most dbs, TINYTEXT() on ndb
oslo_db.sqlalchemy.String(80, ndb_size=55) -> VARCHAR(64) on most
dbs, VARCHAR(55) on ndb

don't worry about implementation, can the above declaration ->
datatype mapping work ?

Also where are we using AutoStringText(), it sounds like this is just
what SQLAlchemy calls the Text() datatype?   (e.g. an unlengthed
string type, comes out as CLOB etc).




>
> In many cases, the use of these could be removed by simply changing the
> columns to more appropriate types and sizes. There is a tremendous amount of
> wasted space in many of the databases. I'm more than willing to help out
> with this if teams decide they would rather do that instead as the long-term
> solution. Until then, these functions enable the use of both with minimal
> impact.
>
> Another thing to keep in mind is that the only services that I've had to
> adjust column sizes for are:
>
> Cinder
> Neutron
> Nova
> Magnum
>
> The other services that I'm working on like Keystone, Barbican, Murano,
> Glance, etc. only need changes to:
>
> 1. Ensure that foreign keys are dropped and created in the correct order
> when changing things like indexes, constraints, etc. Many services do these
> proper steps already, there are just cases where this has been missed
> because InnoDB is very forgiving on this. But other databases are not.
> 2. Fixing the database migration and sync operations to use oslo.db, pass
> the right parameters, etc. Something that should have been done in the first
> place, but hasn't. So this is more of a housecleaning step to ensure that
> services are using oslo.db correctly.
>
> The only other oddball use case is dealing with disabling nested transactions,
> where Neutron is the only one that does this.
>
> On the flip side, here is a short list of services that I haven't had to
> make ANY changes for other than having oslo.db 4.24 or above:
>
> aodh
> gnocchi
> heat
> ironic
> manila
>
>>
>> 3. it's not clear (I don't even know right now by looking at these
>> reviews) when one would use "AutoStringTinyText" or "AutoStringSize".
>> For example in
>> https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
>> I see a list of String(255)'s changed to one type or the other without
>> any clear notion why one would use one or the other.  Having names
>> that define simply the declared nature of the type would be most
>> appropriate.
>
>
> One has to look at what the column is being used for and decide what
> appropriate remediation steps are. This takes time and one must research
> what kind of data goes in the column, what puts it there, what consumes it,
> and what remediation would have the least amount of impact.
>
>>
>> I can add these names up to oslo.db and then we would just need to
>> spread these out through all the open ndb reviews and then also patch
>> up Cinder which seems to be the only ndb implementation that's been
>> merged so far.
>>
>> Keep in mind this is really me trying to correct my own mistake, as I
>> helped design and approved of the original approach here where
>> projects would be consuming against the "ndb." namespace.  However,
>> after seeing it in reviews how prevalent the use of this extremely
>> backend-specific name is, I think the use 

Re: [openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-24 Thread Michael Bayer
Hey, good news: the owner of the issue upstream found that the SQL
standard agrees with my proposed behavior.   So while this is current
MariaDB 10.2 / 10.3 behavior, hopefully it will be resolved in an
upcoming release within those series.   not sure of the timing though
so we may not be able to duck it.

On Mon, Jul 24, 2017 at 11:16 AM, Michael Bayer <mba...@redhat.com> wrote:
> On Mon, Jul 24, 2017 at 10:37 AM, Doug Hellmann <d...@doughellmann.com> wrote:
>> Excerpts from Michael Bayer's message of 2017-07-23 16:39:20 -0400:
>>> Hey list -
>>>
>>> It appears that MariaDB as of version 10.2 has made an enhancement
>>> that overall is great and fairly historic in the MySQL community,
>>> they've made CHECK constraints finally work.   For all of MySQL's
>>> existence, you could emit a CREATE TABLE statement that included CHECK
>>> constraint, but the CHECK phrase would be silently ignored; there are
>>> no actual CHECK constraints in MySQL.
>>>
>>> Mariadb 10.2 has now made CHECK do something!  However!  the bad news!
>>>  They have decided that the CHECK constraint against a single column
>>> should not be implicitly dropped if you drop the column [1].   In case
>>> you were under the impression your SQLAlchemy / oslo.db project
>>> doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
>>> type, or the "ENUM" type without using MySQL's native ENUM feature
>>> (less likely), there's a simple CHECK constraint in there.
>>>
>>> So far the Zun project has reported the first bug on Alembic [2] that
>>> they can't emit a DROP COLUMN for a boolean column.In [1] I've
>>> made my complete argument for why this decision on the MariaDB side is
>>> misguided.   However, be on the lookout for boolean columns that can't
>>> be DROPPED on some environments using newer MariaDB.  Workarounds for
>>> now include:
>>>
>>> 1. when using Boolean(), set create_constraint=False
>>>
>>> 2. when using Boolean(), make sure it has a "name" to give the
>>> constraint, so that later you can DROP CONSTRAINT easily
>>>
>>> 3. if not doing #1 and #2, in order to drop the column you need to use
>>> the inspector (e.g. from sqlalchemy import inspect; inspector =
>>> inspect(engine)) and locate all the CHECK constraints involving the
>>> target column, and then drop them by name.
>>
>> Item 3 sounds like the description of a helper function we could add to
>> oslo.db for use in migration scripts.
>
> OK, let me give a little more context: if MariaDB holds steady
> here, I will have to implement #3 within Alembic itself (though yes,
> for SQLAlchemy-migrate it's still needed :) ).  MS SQL Server has the
> same limitation for CHECK constraints, and Alembic provides a
> SQL-only procedure that can run as a static SQL element on that
> backend; hopefully the same is possible for MySQL.
>
>
>
>>
>> Doug
>>
>>>
>>> [1] https://jira.mariadb.org/browse/MDEV-4
>>>
>>> [2] 
>>> https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql
>>>
>>


Re: [openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 10:37 AM, Doug Hellmann <d...@doughellmann.com> wrote:
> Excerpts from Michael Bayer's message of 2017-07-23 16:39:20 -0400:
>> Hey list -
>>
>> It appears that MariaDB as of version 10.2 has made an enhancement
>> that overall is great and fairly historic in the MySQL community,
>> they've made CHECK constraints finally work.   For all of MySQL's
>> existence, you could emit a CREATE TABLE statement that included CHECK
>> constraint, but the CHECK phrase would be silently ignored; there are
>> no actual CHECK constraints in MySQL.
>>
>> Mariadb 10.2 has now made CHECK do something!  However!  the bad news!
>>  They have decided that the CHECK constraint against a single column
>> should not be implicitly dropped if you drop the column [1].   In case
>> you were under the impression your SQLAlchemy / oslo.db project
>> doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
>> type, or the "ENUM" type without using MySQL's native ENUM feature
>> (less likely), there's a simple CHECK constraint in there.
>>
>> So far the Zun project has reported the first bug on Alembic [2] that
>> they can't emit a DROP COLUMN for a boolean column.In [1] I've
>> made my complete argument for why this decision on the MariaDB side is
>> misguided.   However, be on the lookout for boolean columns that can't
>> be DROPPED on some environments using newer MariaDB.  Workarounds for
>> now include:
>>
>> 1. when using Boolean(), set create_constraint=False
>>
>> 2. when using Boolean(), make sure it has a "name" to give the
>> constraint, so that later you can DROP CONSTRAINT easily
>>
>> 3. if not doing #1 and #2, in order to drop the column you need to use
>> the inspector (e.g. from sqlalchemy import inspect; inspector =
>> inspect(engine)) and locate all the CHECK constraints involving the
>> target column, and then drop them by name.
>
> Item 3 sounds like the description of a helper function we could add to
> oslo.db for use in migration scripts.

OK, let me give a little more context: if MariaDB holds steady
here, I will have to implement #3 within Alembic itself (though yes,
for SQLAlchemy-migrate it's still needed :) ).  MS SQL Server has the
same limitation for CHECK constraints, and Alembic provides a
SQL-only procedure that can run as a static SQL element on that
backend; hopefully the same is possible for MySQL.



>
> Doug
>
>>
>> [1] https://jira.mariadb.org/browse/MDEV-4
>>
>> [2] 
>> https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql
>>
>


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-24 Thread Michael Bayer
On Mon, Jul 24, 2017 at 10:01 AM, Jay Pipes wrote:

> I would much prefer to *add* a brand new schema migration that handles
> conversion of the entire InnoDB schema at a certain point to an
> NDB-compatible one *after* that point. That way, we isolate the NDB changes
> to one specific schema migration -- and can point users to that one specific
> migration in case bugs arise. This is the reason that every release we add a
> number of "placeholder" schema migration numbered files to handle situations
> such as these.
>
> I understand that Oracle wants to support older versions of OpenStack in
> their distribution and that's totally cool with me. But, the proper way IMHO
> to do this kind of thing is to take one of the placeholder migrations and
> use that as the NDB-conversion migration. I would posit that since Oracle
> will need to keep some not-insignificant amount of Python code in their
> distribution fork of Nova in order to bring in the oslo.db and Nova NDB
> support, that it will actually be *easier* for them to maintain a *separate*
> placeholder schema migration for all NDB conversion work instead of changing
> an existing schema migration with a new patch.

OK, if it is feasible for the MySQL engine to build out the whole
schema as InnoDB and then do a migrate that changes the storage engine
of all tables to NDB and then also changes all the datatypes, that can
work.   If you want to go that way, then fine.
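Mechanically, the one-shot conversion migration Jay describes would come down to statements of this shape. The table list and helper name are illustrative; NDBCLUSTER is MySQL's synonym for the NDB engine:

```python
def ndb_conversion_statements(table_names):
    # Hypothetical body of a single "convert everything to NDB" migration:
    # flip each table's storage engine; datatype adjustments (VARCHAR ->
    # TINYTEXT/TEXT, etc.) would then follow per table.
    return ["ALTER TABLE %s ENGINE=NDBCLUSTER" % name for name in table_names]
```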

However, I may be missing something but I'm not seeing the practical
difference.   This new "ndb" migration still goes into the source
tree, still gets invoked for all users, and if the "if ndb_enabled()"
flag is somehow broken, it breaks just as well if it's in a brand new
migration vs. if it's in an old migration.

Suppose "if ndb_enabled(engine)" is somehow broken.  Either it crashes
the migrations, or it runs inappropriately.

If the conditional is in a brand new migration file that's pushed out
in Queens, *everybody* runs it when they upgrade, as well as when they
do fresh installation, and they get the breakage.

If the conditional is in Havana migration 216, *everybody* gets it when
they do a fresh installation, and they get the breakage.   Upgraders do not.

How is "new migration" better than "make old migration compatible" ?

Again, fine by me if the other approach works, I'm just trying to see
where I'm being dense here.

Keep in mind that existing migrations *do* break and have to be fixed
- because while the migration files don't change, the databases they
talk to do.  The other thread I introduced about Mariadb 10.2 now
refusing to DROP columns that have a CHECK constraint is an example,
and will likely mean lots of old migration files across openstack
projects will need adjustments.








>
> All the best,
> -jay
>
>


Re: [openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-23 Thread Michael Bayer
On Sun, Jul 23, 2017 at 6:10 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> Glad you brought this up, Mike. I was going to start a thread about this.
> Comments inline.
>
> On 07/23/2017 05:02 PM, Michael Bayer wrote:
> Well, besides that point (which I agree with), that is attempting to change
> an existing database schema migration, which is a no-no in my book ;)


OK this point has come up before and I always think there's a bit of
an over-broad kind of purism being applied (kind of like when someone
says, "never use global variables ever!" and I say, "OK so sys.modules
is out right ?" :)  ).

I agree with "never change a migration" to the extent that you should
never change the *SQL steps emitted for a database migration*.  That
is, if your database migration emitted "ALTER TABLE foobar foobar
foobar" on a certain target database, that should never change.   No
problem there.

However, what we're doing here is adding new datatype logic for the
NDB backend which are necessarily different; not to mention that NDB
requires more manipulation of constraints to make certain changes
happen.  To make all that work, the *Python code that emits the SQL
for the migration* needs to have changes made, mostly (I would say
completely) in the form of new conditionals for NDB-specific concepts.
   In the case of the datatypes, the migrations will need to refer to
a SQLAlchemy type object that's been injected with the ndb-specific
logic when the NDB backend is present; I've made sure that when the
NDB backend is *not* present, the datatypes behave exactly the same as
before.

So basically, *SQL steps do not change*, but *Python code that emits
the SQL steps* necessarily has to change to accommodate when the
"ndb" flag is present - this because these migrations have to run on
brand new ndb installations in order to create the database.   If Nova
and others did the initial "create database" without using the
migration files and instead used a create_all(), things would be
different, but that's not how things work (and also it is fine that
the migrations are used to build up the DB).
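The distinction can be sketched as follows. Everything here is hypothetical: the flag, the table, and the column are stand-ins, not oslo.db's real helper or any project's actual migration.

```python
def ndb_enabled(engine):
    # stand-in for oslo.db's real backend detection
    return getattr(engine, "use_ndb", False)


def upgrade_sql(engine):
    # The Python branches on the backend, but the statement emitted for
    # non-ndb engines stays byte-for-byte what the migration always emitted.
    if ndb_enabled(engine):
        return "ALTER TABLE instances MODIFY COLUMN user_data TEXT"
    return "ALTER TABLE instances MODIFY COLUMN user_data MEDIUMTEXT"
```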

There is also the option to override the compilation for the base
SQLAlchemy String type so that no change at all would be needed
to consuming projects in this area, but it seems like there is a need
to specify ndb-specific length arguments in some cases so keeping the
oslo_db-level API seems like it would be best.  (Note that the ndb
module in oslo_db *does* instrument the CreateTable construct globally
however, though it is very careful not to be involved unless the ndb
flag is present).




>
>> I can add these names up to oslo.db and then we would just need to
>> spread these out through all the open ndb reviews and then also patch
>> up Cinder which seems to be the only ndb implementation that's been
>> merged so far.
>
>
> +1
>
>> Keep in mind this is really me trying to correct my own mistake, as I
>> helped design and approved of the original approach here where
>> projects would be consuming against the "ndb." namespace.  However,
>> after seeing it in reviews how prevalent the use of this extremely
>> backend-specific name is, I think the use of the name should be much
>> less frequent throughout projects and only surrounding logic that is
>> purely to do with the ndb backend and no others.   At the datatype
>> level, the chance of future naming conflicts is very high and we
>> should fix this mistake (my mistake) before it gets committed
>> throughout many downstream projects.
>
>
> I had a private conversation with Octave on Friday. I had mentioned that I
> was upset I didn't know about the series of patches to oslo.db that added
> that module. I would certainly have argued against that approach. Please
> consider hitting me with a cluestick next time something of this nature pops
> up. :)
>
> Also, as I told Octave, I have no problem whatsoever with NDB Cluster. I
> actually think it's a pretty brilliant piece of engineering -- and have for
> over a decade since I worked at MySQL.
>
> My complaint regarding the code patch proposed to Nova was around the
> hard-coding of the ndb namespace into the model definitions.
>
> Best,
> -jay
>
>>
>> [1] https://review.openstack.org/#/c/427970/
>>
>> [2] https://review.openstack.org/#/c/446643/
>>
>> [3] https://review.openstack.org/#/c/446136/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman

[openstack-dev] [oslo.db] [ndb] ndb namespace throughout openstack projects

2017-07-23 Thread Michael Bayer
I've been working with Octave Orgeron, assisting with new rules and
datatypes that would allow projects to support the NDB storage engine
with MySQL.

To that end, we've made changes to oslo.db in [1] to support this, and
there are now a bunch of proposals such as [2] [3] to implement new
ndb-specific structures in projects.

The patches for all downstream projects except Cinder are still under
review. While we have a chance to avoid a future naming problem, I am
making the following proposal:

Rather than having all the projects make use of
oslo_db.sqlalchemy.ndb.AutoStringTinyText / AutoStringSize, we add new
generic types to oslo.db :

oslo_db.sqlalchemy.types.SmallString
oslo_db.sqlalchemy.types.String

(or similar )

Internally, the ndb module would be mapping its implementation for
AutoStringTinyText and AutoStringSize to these types.   Functionality
would be identical, just the naming convention exported to downstream
consuming projects would no longer refer to "ndb." for
datatypes.
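A minimal sketch of that indirection, with hypothetical names (the real oslo.db types are SQLAlchemy constructs, not strings): projects reference only the generic name, and each backend module registers its own concrete mapping.

```python
# Sketch of backend-neutral type resolution.  Downstream projects name
# only generic types; backend modules (ndb, hypothetically db2, ...)
# register their concrete implementations.  Illustrative only.

GENERIC_TYPES = {}

def register_backend(backend, mapping):
    GENERIC_TYPES[backend] = mapping

def resolve(backend, generic_name):
    """Return the backend-specific implementation of a generic type,
    falling back to the default (InnoDB-oriented) mapping."""
    mapping = GENERIC_TYPES.get(backend, GENERIC_TYPES["default"])
    return mapping.get(generic_name, GENERIC_TYPES["default"][generic_name])

register_backend("default", {"SmallString": "VARCHAR(255)",
                             "String": "VARCHAR(255)"})
# The ndb module maps the same generic names to its own implementations:
register_backend("ndb", {"SmallString": "VARCHAR(64)",
                         "String": "TINYTEXT"})
```

An unregistered backend ("db2", say) transparently gets the defaults, which is exactly the extensibility argument made in point 1 below.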

Reasons for doing so include:

1. OpenStack projects should be relying upon oslo.db to make the best
decisions for any given database backend, hardcoding as few
database-specific details as possible.   While it's unavoidable that
migration files will have some "if ndb:" kinds of blocks, for the
datatypes themselves the "ndb." namespace defeats extensibility.  If
IBM wanted OpenStack to run on DB2 (again?) and wanted to add a
"db2.String" implementation to oslo.db, for example, the naming and
datatypes would need to be opened up as above in any case; might as
well make the change now before the patch sets are merged.

2. The names "AutoStringTinyText" and "AutoStringSize" are themselves
confusing and inconsistent with each other (e.g. what is "auto"?  One is
"auto" if it's String or TinyText and the other is "auto" if it's
String, and..."size"?)

3. It's not clear (I don't even know right now by looking at these
reviews) when one would use "AutoStringTinyText" vs. "AutoStringSize".
For example in 
https://review.openstack.org/#/c/446643/10/nova/db/sqlalchemy/migrate_repo/versions/216_havana.py
I see a list of String(255)'s changed to one type or the other without
any clear notion why one would use one or the other.  Having names
that define simply the declared nature of the type would be most
appropriate.

I can add these names up to oslo.db and then we would just need to
spread these out through all the open ndb reviews and then also patch
up Cinder which seems to be the only ndb implementation that's been
merged so far.

Keep in mind this is really me trying to correct my own mistake, as I
helped design and approved of the original approach here where
projects would be consuming against the "ndb." namespace.  However,
after seeing it in reviews how prevalent the use of this extremely
backend-specific name is, I think the use of the name should be much
less frequent throughout projects and only surrounding logic that is
purely to do with the ndb backend and no others.   At the datatype
level, the chance of future naming conflicts is very high and we
should fix this mistake (my mistake) before it gets committed
throughout many downstream projects.


[1] https://review.openstack.org/#/c/427970/

[2] https://review.openstack.org/#/c/446643/

[3] https://review.openstack.org/#/c/446136/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [oslo.db] [relational database users] heads up for a MariaDB issue that will affect most projects

2017-07-23 Thread Michael Bayer
Hey list -

It appears that MariaDB as of version 10.2 has made an enhancement
that overall is great and fairly historic in the MySQL community:
they've made CHECK constraints finally work.   For all of MySQL's
existence, you could emit a CREATE TABLE statement that included a
CHECK constraint, but the CHECK phrase would be silently ignored; there
are no actual CHECK constraints in MySQL.

MariaDB 10.2 has now made CHECK do something!  However!  The bad news:
 they have decided that the CHECK constraint against a single column
should not be implicitly dropped if you drop the column [1].   In case
you were under the impression your SQLAlchemy / oslo.db project
doesn't use CHECK constraints, if you are using the SQLAlchemy Boolean
type, or the "ENUM" type without using MySQL's native ENUM feature
(less likely), there's a simple CHECK constraint in there.

So far the Zun project has reported the first bug on Alembic [2] that
they can't emit a DROP COLUMN for a boolean column.  In [1] I've
made my complete argument for why this decision on the MariaDB side is
misguided.   However, be on the lookout for boolean columns that can't
be DROPPED on some environments using newer MariaDB.  Workarounds for
now include:

1. when using Boolean(), set create_constraint=False

2. when using Boolean(), make sure it has a "name" to give the
constraint, so that later you can DROP CONSTRAINT easily

3. if not doing #1 and #2, in order to drop the column you need to use
the inspector (e.g. from sqlalchemy import inspect; inspector =
inspect(engine)) and locate all the CHECK constraints involving the
target column, and then drop them by name.
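For workaround #3, the shape of the fix is: enumerate the CHECK constraints on the table, drop those mentioning the target column, then drop the column. Here is a stdlib-only sketch of that ordering (the dicts mimic what SQLAlchemy's `inspector.get_check_constraints()` returns; the table and constraint names are made up):

```python
# Sketch of workaround #3: on MariaDB 10.2+, drop dependent CHECK
# constraints before dropping the column itself.  The constraint dicts
# mimic SQLAlchemy inspector output; all names are illustrative.

def drop_column_statements(table, column, check_constraints):
    """Emit DDL that drops `column`, first dropping any CHECK
    constraint whose text references it."""
    stmts = []
    for ck in check_constraints:
        if column in ck["sqltext"]:
            stmts.append("ALTER TABLE %s DROP CONSTRAINT %s"
                         % (table, ck["name"]))
    stmts.append("ALTER TABLE %s DROP COLUMN %s" % (table, column))
    return stmts

stmts = drop_column_statements(
    "container", "interactive",
    [{"name": "ck_container_interactive",
      "sqltext": "interactive IN (0, 1)"}])
```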

[1] https://jira.mariadb.org/browse/MDEV-4

[2] 
https://bitbucket.org/zzzeek/alembic/issues/440/cannot-drop-boolean-column-in-mysql



Re: [openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Michael Bayer
On Tue, Jul 18, 2017 at 1:02 PM, Doug Hellmann  wrote:

> Option renaming was originally meant as an operator-facing feature
> to handle renames for values coming from the config file, but not
> as they are used in code.  mtreinish added
> https://review.openstack.org/#/c/357987/ to address this for Tempest,
> so it's possible there's a bug in the logic in oslo.config somewhere
> (or that oslo.db's case is a new one).

OK, patch set 5 at
https://review.openstack.org/#/c/334182/5/oslo_db/options.py shows
what I'm trying to do to make this work; however, the test case added
in test_options still fails.   If this is supposed to "just work", then
I hope someone can confirm that.

Alternatively, a simple flag on DeprecatedOpt such as "alias_on_conf=True"
would be super easy here, so that specific names in our DeprecatedOpt
could be mirrored, because we know projects are consuming them on conf.


>
> That said, the options defined by a library are *NOT* part of its
> API, and should never be used by code outside of the library. The
> whole point of isolating options like that is to give operators a
> way to change the way an app uses a library (drivers, credentials,
> etc.) without the app having to know the details.  Ideally the nova
> tests that access oslo.db configuration options directly would
> instead use an API in oslo.db to do the same thing (that API may
> need to be written, if it doesn't already exist).

OK, that is I suppose an option, but clearly a long and arduous one at
this point (add new API to oslo.db, find all projects looking at
conf., submit gerrits, somehow make sure projects never talk to
conf. directly?   how would we ensure that?  shouldn't
oslo.config allow the library that defines the options to plug in its
own "private" namespace so that consuming projects don't make this
mistake?)



>
> At that point, only oslo.db code would refer to the option, and you
> could use the deprecated_name and deprecated_group settings to
> describe the move and change the references to oslo.db within the
> library using a single patch to oslo.db.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [oslo.config] how to deprecate a name but still have it as conf.

2017-07-18 Thread Michael Bayer
In oslo.db, I'd like to rename the option "idle_timeout" to
"connection_recycle_time".

Following the pattern of using DeprecatedOpt, we get this:

cfg.IntOpt('connection_recycle_time',
           default=3600,
           deprecated_opts=[cfg.DeprecatedOpt('idle_timeout',
                                              group="DATABASE"),
                            cfg.DeprecatedOpt('idle_timeout',
                                              group="database"),
                            cfg.DeprecatedOpt('sql_idle_timeout',
                                              group='DEFAULT'),
                            cfg.DeprecatedOpt('sql_idle_timeout',
                                              group='DATABASE'),
                            cfg.DeprecatedOpt('idle_timeout',
                                              group='sql')],


However, Nova is accessing "conf.idle_timeout" directly in
nova/db/sqlalchemy/api.py -> _get_db_conf, and the Tempest run fails.

Per the oslo.config documentation, the "deprecated_name" flag would
create an alias on the conf. namespace.  However, this has no effect,
even if I remove the other deprecated parameters completely:

cfg.IntOpt('connection_recycle_time',
           default=3600,
           deprecated_name='idle_timeout',

A simple unit test fails to see a value for
conf.connection_recycle_time, including if I add
"deprecated_group='DATABASE'" which is the group that's in this
specific test (however this would not be a solution anyway because
projects use different group names).

From this, it would appear that oslo.config has made it impossible to
deprecate the name of an option because DeprecatedOpt() has no means
of providing the value as an alias on the conf. object.   There's not
even a way I could have projects like nova make a forwards-compatible
change here.
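To make the gap concrete, here is a toy stand-in for the conf namespace (not oslo.config internals) showing the aliasing behavior that DeprecatedOpt currently lacks:

```python
# Toy model of conf-level aliasing: the deprecated name keeps working
# as an attribute on conf during the transition.  This is NOT how
# oslo.config is implemented; it only illustrates the desired behavior.

class ConfNamespace:
    def __init__(self, values, aliases=None):
        self._values = values          # new option name -> value
        self._aliases = aliases or {}  # deprecated name -> new name

    def __getattr__(self, name):
        # Resolve deprecated names through the alias map, so code that
        # still reads conf.idle_timeout keeps working.
        target = self._aliases.get(name, name)
        try:
            return self._values[target]
        except KeyError:
            raise AttributeError(name)

conf = ConfNamespace({"connection_recycle_time": 3600},
                     aliases={"idle_timeout": "connection_recycle_time"})
```

Both the new and the deprecated attribute resolve to the same value, which is the behavior consuming projects would need.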

Is this a bug in oslo.config or in oslo.db's usage of oslo.config?



[openstack-dev] [all][api] POST /api-wg/news

2017-06-29 Thread michael mccune

Greetings OpenStack community,

Today's meeting was slightly longer than last week's, with a few more in 
attendance as well. There were no new major issues brought up and the 
main topics were last week's frozen change[4] and a few minor 
fixes[5][6] in the queue now. These fixes were considered trivial and 
have been merged.


# Newly Published Guidelines

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

# Newly published typo fixes

* Fix html_last_updated_fmt for Python3
  https://review.openstack.org/475219

* Fix missing close bracket
  https://review.openstack.org/#/c/478603/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready 
for review)

  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address 
your concerns in an email to the OpenStack developer mailing list[1] 
with the tag "[api]" in the subject. In your email, you should include 
any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [2].


Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] 
https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

[4] https://review.openstack.org/#/c/446138/
[5] https://review.openstack.org/#/c/475219/
[6] https://review.openstack.org/#/c/478603/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



[openstack-dev] [octavia] Weekly meeting time and channel change

2017-06-28 Thread Michael Johnson

As discussed in today's octavia IRC meeting we are changing the meeting time
and IRC channel for the weekly meeting.

Starting next week we will now be meeting at 17:00 UTC on Wednesdays in
channel #openstack-meeting.

This is the same day, just three hours earlier to accommodate team members
in different time zones.

Details can be found here: http://eavesdrop.openstack.org/#Octavia_Meeting

An ICS file is available here:
http://eavesdrop.openstack.org/calendars/octavia-meeting.ics

The original proposal is here:
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118363.html

The "Doodle" we used for voting is here:
https://doodle.com/poll/kxvii2tn9rydp6ed

I hope to see more of you joining us for the octavia meeting!

Michael





Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-28 Thread Michael Johnson
Hi Yipei,

 

I have been meaning to add this as a config option, but in the interim you can 
do the following to disable the automatic cleanup by disabling the revert flow 
in taskflow:

 

In octavia/common/base_taskflow.py, line 37, add "never_resolve=True," to the 
engine load parameters.
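For context on what that flag changes, here is a toy model (not taskflow's API) of revert-on-failure: normally a mid-flow failure reverts the completed tasks, which is what deletes the amphora before you can inspect it, while `never_resolve` leaves the completed work in place.

```python
# Toy model of taskflow's revert behavior.  With never_resolve=True the
# flow leaves completed work (e.g. the booted amphora) in place after a
# failure instead of reverting it.  Not taskflow code; illustration only.

state = []

def boot_amphora():
    state.append("amphora")

def delete_amphora():
    state.remove("amphora")

def plug_vip():
    raise RuntimeError("500 from amphora agent")

def run_flow(tasks, never_resolve=False):
    """Run (execute, revert) pairs; on failure, revert completed tasks
    unless never_resolve is set."""
    completed = []
    for execute, revert in tasks:
        try:
            execute()
        except Exception:
            if not never_resolve:
                for _, rv in reversed(completed):
                    rv()
            return False
        completed.append((execute, revert))
    return True

tasks = [(boot_amphora, delete_amphora), (plug_vip, lambda: None)]

ok = run_flow(tasks)                        # amphora reverted (deleted)
kept = run_flow(tasks, never_resolve=True)  # amphora left for debugging
```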

 

Michael

 

From: Yipei Niu [mailto:newy...@gmail.com] 
Sent: Monday, June 26, 2017 11:34 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [octavia] fail to plug vip to amphora

 

Hi, Micheal,

 

Thanks a lot for your help, but I still have one question. 

 

In Octavia, once the controller worker fails to plug the VIP into the amphora, 
the amphora is deleted immediately, making it impossible to trace the error. How 
can I prevent Octavia from stopping and deleting the amphora?

 

Best regards,

Yipei 

 

On Mon, Jun 26, 2017 at 11:21 AM, Yipei Niu <newy...@gmail.com 
<mailto:newy...@gmail.com> > wrote:

Hi, all,

 

I am trying to create a load balancer in octavia. The amphora can be booted 
successfully, and can be reached via icmp. However, octavia fails to plug vip 
to the amphora through the amphora client api and returns 500 status code, 
causing some errors as follows.

 

   |__Flow 
'octavia-create-loadbalancer-flow': InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
Traceback (most recent call last):

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/usr/local/lib/python2.7/dist-packages/taskflow/engines/action_engine/executor.py",
 line 53, in _execute_task

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
result = task.execute(**arguments)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 240, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/controller/worker/tasks/amphora_driver_tasks.py", 
line 219, in execute

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
amphora, loadbalancer, amphorae_network_config)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
137, in post_vip_plug

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
net_info)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File 
"/opt/stack/octavia/octavia/amphorae/drivers/haproxy/rest_api_driver.py", line 
378, in plug_vip

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
return exc.check_exception(r)

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
  File "/opt/stack/octavia/octavia/amphorae/drivers/haproxy/exceptions.py", 
line 32, in check_exception

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
raise responses[status_code]()

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker 
InternalServerError: Internal Server Error

2017-06-21 09:49:35.864 25411 ERROR octavia.controller.worker.controller_worker

 

To fix the problem, I logged in to the amphora and found that there is an http 
server process listening on port 9443, so I think the amphora API service is 
active. But I do not know how to further investigate what error happens inside 
the amphora API service and solve it. I look forward to your valuable comments.

 

Best regards,

Yipei 

 



Re: [openstack-dev] [octavia] fail to plug vip to amphora

2017-06-26 Thread Michael Johnson
Hello Yipei,

 

You are on the track to debug this.

When you are logged into the amphora, please check the following logs to see 
what the amphora-agent error is:

 

/var/log/amphora-agent.log

And

/var/log/syslog

 

One of those two logs will have the error information.

 

Michael

 

 

From: Yipei Niu [mailto:newy...@gmail.com] 
Sent: Sunday, June 25, 2017 8:21 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [octavia] fail to plug vip to amphora

 




Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
Certainly removing the "--no-binary :all:" results in a build that builds.
I'll test and see if it works todayish.

Michael

On Mon, Jun 12, 2017 at 9:56 PM, Chris Smart <m...@csmart.io> wrote:

> On Mon, 12 Jun 2017, at 21:36, Michael Still wrote:
> > The experimental buildroot based ironic python agent bans all binaries, I
> > am not 100% sure why. Chris is the guy there.
> >
>
> The Buildroot ironic-python-agent forces a build of all the
> ironic-python-agent dependencies (as per requirements and constraints)
> with no-binary :all:, then builds the ironic-python-agent wheel from the
> git clone, so it can then install them all from locally compiled wheels
> into the target. [1]
>
> IIRC this was to make sure that the wheels matched the target. It could
> be all done wrong though.
>
> [1]
> https://github.com/csmart/ipa-buildroot/blob/master/
> buildroot-ipa/board/openstack/ipa/post-build.sh#L113
>
> -c
>



-- 
Rackspace Australia


Re: [openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
The experimental buildroot based ironic python agent bans all binaries, I
am not 100% sure why. Chris is the guy there.

I'm using that IPA as neither the coreos nor tinyipa versions support the
broadcom nic in this here ibm x3550.
Michael

On 12 Jun 2017 8:56 PM, "Sean Dague" <s...@dague.net> wrote:

> On 06/12/2017 04:29 AM, Michael Still wrote:
> > Hi,
> >
> > I'm trying to explain this behaviour in stable/newton, which specifies
> > Routes==2.3.1 in upper-constraints:
> >
> > $ pip install --no-binary :all: Routes==2.3.1
> > ...
> >   Could not find a version that satisfies the requirement Routes==2.3.1
> > (from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7, 1.7.1,
> > 1.7.2, 1.7.3, 1.8, 1.9, 1.9.1, 1.9.2, 1.10, 1.10.1, 1.10.2, 1.10.3,
> > 1.11, 1.12, 1.12.1, 1.12.3, 1.13, 2.0, 2.1, 2.2, 2.3, 2.4.1)
> > Cleaning up...
> > No matching distribution found for Routes==2.3.1
> >
> > There is definitely a 2.3.1 on pip:
> >
> > $ pip install Routes==2.3.1
> > ...
> > Successfully installed Routes-2.3.1 repoze.lru-0.6 six-1.10.0
> >
> > This implies to me that perhaps Routes version 2.3.1 is a binary-only
> > release and that stable/newton is therefore broken for people who don't
> > like binary packages (in my case because they're building an install
> > image for an architecture which doesn't match their host architecture).
> >
> > Am I confused? I'd love to be enlightened.
>
> Routes 2.3.1 appears to be any arch wheel. Is there a specific reason
> that's not going to work for you? (e.g. Routes-2.3.1-py2.py3-none-any.whl)
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] Is Routes==2.3.1 a binary only package or something?

2017-06-12 Thread Michael Still
Hi,

I'm trying to explain this behaviour in stable/newton, which specifies
Routes==2.3.1 in upper-constraints:

$ pip install --no-binary :all: Routes==2.3.1
...
  Could not find a version that satisfies the requirement Routes==2.3.1
(from versions: 1.5, 1.5.1, 1.5.2, 1.6, 1.6.1, 1.6.2, 1.6.3, 1.7, 1.7.1,
1.7.2, 1.7.3, 1.8, 1.9, 1.9.1, 1.9.2, 1.10, 1.10.1, 1.10.2, 1.10.3, 1.11,
1.12, 1.12.1, 1.12.3, 1.13, 2.0, 2.1, 2.2, 2.3, 2.4.1)
Cleaning up...
No matching distribution found for Routes==2.3.1

There is definitely a 2.3.1 on pip:

$ pip install Routes==2.3.1
...
Successfully installed Routes-2.3.1 repoze.lru-0.6 six-1.10.0

This implies to me that perhaps Routes version 2.3.1 is a binary-only
release and that stable/newton is therefore broken for people who don't
like binary packages (in my case because they're building an install image
for an architecture which doesn't match their host architecture).
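If that hypothesis is right, i.e. 2.3.1 was uploaded to PyPI as a wheel with no sdist, then `--no-binary :all:` simply filters it out of the candidate list, which matches the version list pip prints above. A toy model of that filtering (illustrative; not pip's actual resolver, and the upload data is assumed):

```python
# Toy model of pip's candidate filtering under --no-binary :all:.
# A release uploaded only as a wheel has no sdist to build from, so it
# disappears from the candidates.  The upload data here is assumed.

releases = {
    "2.3": ["sdist", "wheel"],
    "2.3.1": ["wheel"],          # hypothetically wheel-only
    "2.4.1": ["sdist", "wheel"],
}

def candidates(releases, no_binary=False):
    """Return installable versions; no_binary restricts to sdists."""
    if no_binary:
        return sorted(v for v, kinds in releases.items() if "sdist" in kinds)
    return sorted(releases)
```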

Am I confused? I'd love to be enlightened.

Michael

-- 
Rackspace Australia


[openstack-dev] [Nova] Deprecating localfs?

2017-06-07 Thread Michael Still
Greetings from an ancient thread (but the most recent one to openstack-dev
about localfs that I can find).

In 2014, which were heady times, we decided that we couldn't deprecate
nova.virt.disk.vfs.localfs because we theorised that FreeBSD and containers
would soon need it. Oh, and xenapi uses it a little too.

It is now three years later, and there has been some recent thought given
to deprecating localfs again, which I bumped into when I chose this code as
the first candidate for a prototype privsep user inside the nova codebase
[1].

I think we therefore need to have this conversation again. Do we still have
any clear use cases for localfs? Where does guestfs not run at the moment?
Or did I waste my time privsep'ing localfs?

Thanks,
Michael

1: https://review.openstack.org/#/c/459166/





On Wed, Sep 24, 2014 at 7:04 PM, Daniel P. Berrange <berra...@redhat.com>
wrote:

> On Wed, Sep 24, 2014 at 08:26:44AM +1000, Michael Still wrote:
> > On Tue, Sep 23, 2014 at 8:58 PM, Daniel P. Berrange <berra...@redhat.com>
> wrote:
> > > On Tue, Sep 23, 2014 at 02:27:52PM +0400, Roman Bogorodskiy wrote:
> > >>   Michael Still wrote:
> > >>
> > >> > Hi.
> > >> >
> > >> > I know we've been talking about deprecating
> nova.virt.disk.vfs.localfs
> > >> > for a long time, in favour of wanting people to use libguestfs
> > >> > instead. However, I can't immediately find any written documentation
> > >> > for when we said we'd do that thing.
> > >> >
> > >> > Additionally, this came to my attention because Ubuntu 14.04 is
> > >> > apparently shipping a libguestfs old enough to cause us to emit the
> > >> > "falling back to localfs" warning, so I think we need Ubuntu to
> catch
> > >> > up before we can do this thing.
> > >> >
> > >> > So -- how about we remove localfs early in Kilo to give Canonical a
> > >> > release to update libguestfs?
> > >> >
> > >> > Thoughts appreciated,
> > >> > Michael
> > >>
> > >> If at some point we'd start moving into getting FreeBSD supported as a
> > >> host OS for OpenStack, then it would make sense to keep localfs for
> that
> > >> configuration.
> > >>
> > >> libguestfs doesn't work on FreeBSD yet. On the other hand, localfs
> > >> code in Nova doesn't look like it'd be hard to port.
> > >
> > > Yep, that's a good point and in fact applies to Linux too when
> considering
> > > the non-KVM/QEMU drivers libvirt supports. eg if your host does not
> have
> > > virtualization and you're using LXC for container virt, then we need to
> > > have localfs still be present. Likewise if running Xen.
> > >
> > > So we definitely cannot delete or even deprecate it unconditionally. We
> > > simply want to make sure localfs isn't used when Nova is configured to
> > > run QEMU/KVM via libvirt.
> > >
> > > So if we take the config option approach I suggested, then we'd set a
> > > default value for the vfs_impl parameter according to which libvirt
> > > driver you have enabled.
> >
> > I'm glad we've had this thread, because I hadn't thought of the
> > FreeBSD case at all. In that case I wonder if we want to water down
> > the warning we currently log in this case:
> >
> > LOG.warn(_LW("Unable to import guestfs"
> >  "falling back to VFSLocalFS"))
> >
> > If feel like it should be an info if we know some platforms will
> > always have this occur. I know this is a minor thing, but this came to
> > my attention because at lease one operator was concerned by seeing
> > that warning in their logs.
>
> If we take my suggested approach of using a fixed impl based on libvirt
> driver type, then we wouldn't have fallback & so not see this warning.
> Even when we do have fallback, we should only warn if libguestfs is
> installed, but not working.
>
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
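A sketch of the fixed-impl selection Daniel describes above, keyed on the configured libvirt virt_type (hypothetical helper; not nova's actual config handling):

```python
# Sketch of pinning the VFS implementation to the libvirt driver type
# instead of a try-guestfs-then-warn runtime fallback.  Hypothetical
# names; not nova code.

def default_vfs_impl(virt_type):
    """QEMU/KVM hosts default to libguestfs; drivers without hardware
    virt (LXC, Xen) keep localfs, and no fallback warning is logged."""
    if virt_type in ("qemu", "kvm"):
        return "guestfs"
    return "localfs"
```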



-- 
Rackspace Australia


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-31 Thread Michael Johnson
Hi Alex,

 

As you know I am a strong proponent of moving the docs into the project team 
repositories [1].

 

Personally I am in favor of pulling the Band-Aids off and doing option 1.  I 
think centralizing the documentation under one tree and consolidating the build 
into one job has benefits.  I can’t speak to the complexities of the 
documentation template(s?) and the sphinx configuration issues that might arise 
from this plan, but from a PTL/developer/doc writer I like the concept.  I 
fully understand this means work for us to move our API-REF, etc. but I think 
it is worth it.

 

As a secondary vote I am also ok with option 2.  I just think we might as well 
do a full consolidation.

 

I am not a fan of requiring project teams to setup separate repos for the docs, 
there is value to having them in tree for me.  So, I would vote against 3.

 

Michael

 

[1] https://review.openstack.org/#/c/439122/

 

From: Alexandra Settle [mailto:a.set...@outlook.com] 
Sent: Monday, May 22, 2017 2:39 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Cc: 'openstack-d...@lists.openstack.org' <openstack-d...@lists.openstack.org>
Subject: [openstack-dev] [doc][ptls][all] Documentation publishing future

 

Hi everyone,

 

The documentation team are rapidly losing key contributors and core reviewers. 
We are not alone; this is happening across the board. It is making things 
harder, but not impossible.

Since our inception in 2010, we’ve been climbing higher and higher trying to 
achieve the best documentation we could, and uphold our high standards. This is 
something to be incredibly proud of.

 

However, we now need to take a step back and realise that the amount of work we 
are attempting to maintain is now out of reach for the team size that we have. 
At the moment we have 13 cores, none of whom are full-time contributors or 
reviewers. This includes myself.

 

Until this point, the documentation team has owned several manuals that include 
content related to multiple projects, including an installation guide, admin 
guide, configuration guide, networking guide, and security guide. Because the 
team no longer has the resources to own that content, we want to invert the 
relationship between the doc team and project teams, so that we become liaisons 
to help with maintenance instead of asking for project teams to provide 
liaisons to help with content. As a part of that change, we plan to move the 
existing content out of the central manuals repository, into repositories owned 
by the appropriate project teams. Project teams will then own the content and 
the documentation team will assist by managing the build tools, helping with 
writing guidelines and style, but not writing the bulk of the text.

 

We currently have the infrastructure set up to empower project teams to manage 
their own documentation in their own tree, and many do. As part of this change, 
the rest of the existing content from the install guide and admin guide will 
also move into project-owned repositories. We have a few options for how to 
implement the move, and that's where we need feedback now.

 

1. We could combine all of the documentation builds, so that each project has a 
single doc/source directory that includes developer, contributor, and user 
documentation. This option would reduce the number of build jobs we have to 
run, and cut down on the number of separate sphinx configurations in each 
repository. It would completely change the way we publish the results, though, 
and we would need to set up redirects from all of the existing locations to the 
new locations and move all of the existing documentation under the new 
structure.

 

2. We could retain the existing trees for developer and API docs, and add a new 
one for "user" documentation. The installation guide, configuration guide, and 
admin guide would move here for all projects. Neutron's user documentation 
would include the current networking guide as well. This option would add 1 new 
build to each repository, but would allow us to easily roll out the change with 
less disruption in the way the site is organized and published, so there would 
be less work in the short term.

 

3. We could do option 2, but use a separate repository for the new 
user-oriented documentation. This would allow project teams to delegate 
management of the documentation to a separate project sub-team for review, but 
would complicate the process of landing code and documentation updates together 
so that the docs are always up to date.

 

Personally, I think options 2 or 3 are more realistic, for now. It does mean 
that an extra build would have to be maintained, but it retains that key 
differentiator between what is user and what is developer documentation, and 
involves fewer changes to existing published content and build jobs. I definitely think 
option 1 is feasible, and would be happy to make it work if 

[openstack-dev] [all][api] POST /api-wg/news

2017-05-25 Thread michael mccune

Greetings OpenStack community,

Today was a relatively short meeting, with most of the time being devoted to 
a discussion of Monty Taylor's document chain regarding using the 
service catalog for version discovery[4]. The group was largely in 
agreement that work is proceeding well and with a few more minor tweaks 
should be ready for freeze.


We also discussed a plan[5] to create a new mailing list targeted at 
users, developers, and operators who consume the OpenStack APIs and have 
a higher degree of interest in helping to shape them from an SDK 
perspective. This was welcomed as a "good thing"(TM), and everyone seemed 
to agree that having a space for people to discuss SDK- and API-related 
issues specifically is a nice step forward.


Finally, we had a small side-track discussing Sean Dague's progress[6] 
on the global request ID implementation efforts. This seems like great 
work that will help improve the state of tracking and monitoring within 
OpenStack.


# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at this time but please check out the reviews below.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about using the service catalog and doing 
version discovery

  Start at https://review.openstack.org/#/c/462814/

* WIP: microversion architecture archival doc (very early; not yet ready 
for review)

  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address 
your concerns in an email to the OpenStack developer mailing list[1] 
with the tag "[api]" in the subject. In your email, you should include 
any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [2].


Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] 
https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

[4] Start at https://review.openstack.org/#/c/462814/
[5] https://review.openstack.org/#/c/468046/
[6] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117367.html


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Supporting volume_type when booting from volume

2017-05-23 Thread Michael Glasgow

On 5/23/2017 4:43 PM, Dean Troyer wrote:

In this particular case it may not be necessary, but I think early
implementation of composite features in clients is actually the right
way to prove the utility of these things going forward.  Establish and
document the process, implement in a way for users to opt-in, and move
into the services as they are proven useful.


A slight disadvantage of this approach is that the resulting 
incongruence between the client and the API is obfuscating.  When an end 
user can make accurate inferences about the API based on how the client 
works, that's a form of transparency that can pay dividends.


Also in terms of the "slippery slope" that has been raised, putting 
small bits of orchestration into the client creates a grey area there as 
well:  how much is too much?


OTOH I don't disagree with you.  This approach might be the best of 
several not-so-great options, but I wish I could think of a better one.


--
Michael Glasgow

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Would be possible to modify metadata when instance is in rescued state?

2017-05-09 Thread Michael Still
This sort of question comes up every six months or so it seems.

The issue is that for config drive users we don't have a way of rebuilding
all of the config drive (for example, the root password is gone). That's
probably an issue for rescue because it's presumably one of the things you
might reset.

I'm not opposed to exploring options, but I think we need someone to come
up with a proposal which addresses previous concerns. I'd recommend a quick
search of the mailing list archives for previous discussions.

Hope this helps,
Michael




On Tue, May 9, 2017 at 5:04 PM, Pawel Suder <pawel.su...@corp.ovh.com>
wrote:

> Hello,
>
>
> I would like to raise a topic regarding possibilities when metadata could
> be modified on instance.
>
>
> We noticed that instance metadata could be modified only when vm_state is
> set to following values:
>
>
>- active
>- paused
>- suspended
>- stopped
>
> Found:
>
> https://github.com/openstack/nova/blob/master/nova/compute/
> api.py#L3916-L3920
> https://github.com/openstack/nova/blob/master/nova/compute/
> api.py#L3905-L3908
>
> From time to time it is needed to have instance in rescued state.
>
> Scenario: VM is rescued and special metadata attributes need to be set to
> allow cloud-init to act specifically for rescue mode. Metadata data should
> be available only during rescue mode.
>
> Question: what kind of impact could be observed when checks for instance
> state will be modified for methods:
>
> update_instance_metadata
> delete_instance_metadata
>
> Thank you,
>
> Cheers,
> Pawel
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn] metadata agent implementation

2017-05-07 Thread Michael Still
It would be interesting for this to be built in a way where other endpoints
could be added to the list, each having extra headers added to their requests.

For example, we could end up with something quite similar to EC2 IAM if we
could add headers on the way through for requests to OpenStack endpoints.

Do you think the design you're proposing will be extensible like that?

Thanks,
Michael




On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez <dalva...@redhat.com
> wrote:

> Hi folks,
>
> Now that it looks like the metadata proposal is more refined [0], I'd like
> to get some feedback from you on the driver implementation.
>
> The ovn-metadata-agent in networking-ovn will be responsible for
> creating the namespaces, spawning haproxies and so on. But also,
> it must implement most of the "old" neutron-metadata-agent functionality
> which listens on a UNIX socket and receives requests from haproxy,
> adds some headers and forwards them to Nova. This means that we can
> import/reuse a big part of the neutron code.
>
> I wonder what you guys think about depending on the neutron tree for the
> agent implementation, given that we can benefit from a lot of code reuse.
> On the other hand, if we want to get rid of this dependency, we could
> probably write the agent "from scratch" in C (what about having C
> code in the networking-ovn repo?) and, at the same time, it should
> buy us a performance boost (probably not very noticeable since it'll
> respond to requests from local VMs involving a few lookups and
> processing simple HTTP requests; talking to nova would take most
> of the time and this only happens at boot time).
>
> I would probably aim for a Python implementation reusing/importing
> code from neutron tree but I'm not sure how we want to deal with
> changes in neutron codebase (we're actually importing code now).
> Looking forward to reading your thoughts :)
>
> Thanks,
> Daniel
>
> [0] https://review.openstack.org/#/c/452811/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
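The flow Daniel describes — accept a request from haproxy over a UNIX socket, add identifying headers, forward it to Nova — centers on the header-injection step. As a rough stdlib-only sketch: the header names below mirror the ones the neutron metadata agent adds, and the shared-secret HMAC signing is the standard scheme, but the helper function itself is a hypothetical illustration, not code from either repo:

```python
import hmac
import hashlib


def build_forward_headers(instance_id, tenant_id, shared_secret):
    """Build the extra headers a metadata proxy adds before
    forwarding a VM's request on to the Nova metadata API.

    X-Instance-ID-Signature lets Nova verify that a proxy which
    knows the shared secret vouches for the claimed instance ID.
    """
    signature = hmac.new(
        shared_secret.encode("utf-8"),
        instance_id.encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return {
        "X-Instance-ID": instance_id,
        "X-Tenant-ID": tenant_id,
        "X-Instance-ID-Signature": signature,
    }
```

Extending the scheme along the lines Michael asks about would mean keying additional header sets off the destination endpoint before forwarding.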


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Michael Glasgow

On 5/4/2017 10:08 AM, Alex Schultz wrote:

On Thu, May 4, 2017 at 5:32 AM, Chris Dent <cdent...@anticdent.org> wrote:

I know you're not speaking as the voice of your employer when making
this message, so this is not directed at you, but from what I can
tell Oracle's presence upstream (both reviews and commits) in Ocata
and thus far in Pike has not been huge.


Probably because they are still on Kilo.


I don't want to stray off topic, but it seems worth clarifying that 
Oracle OpenStack for Oracle Linux 3.0 is based on Mitaka.


http://www.oracle.com/technetwork/server-storage/openstack/linux/downloads/index.html

Our messaging tends to get confused because we (Oracle) publish separate 
OpenStack distributions for Solaris and Linux.  They are distinct 
products built and supported by different teams on their own schedules. 
It would be less confusing to the community and to customers if those 
efforts were coordinated or even consolidated, but so far that has not 
been possible.


--
Michael Glasgow

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [LBaaS][octavia] Weekly IRC meeting cancelled for three weeks

2017-05-03 Thread Michael Johnson

Hi Octavia team,

At today's weekly LBaaS/Octavia IRC meeting we decided to cancel the next
three meetings due to the OpenStack summit, other conflicts, and vacations.
We just won't have quorum these weeks.

Safe travels for those attending the OpenStack summit and we will meet again
5/31/17.

Michael



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack "R" Release Naming Preliminary Results

2017-04-21 Thread Michael Johnson
Hmm, I never received an email to vote for the name, just for the TC election.

Michael


-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Friday, April 21, 2017 5:12 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>; Openstack Users 
<openst...@lists.openstack.org>
Subject: [openstack-dev] OpenStack "R" Release Naming Preliminary Results

Hello all!

We left the voting open for an additional week because of how long it took to 
get emails sent out and the subsequent email issues. We'll be looking at 
options for next time to make that better.

The raw results are below - however...

**PLEASE REMEMBER** that these now have to go through legal vetting. So it is 
too soon to say "All Hail OpenStack Radium" - the last several naming polls 
have produced issues with the top choice. So as exciting as a potentially 
Radioactive OpenStack release might be, we might also get a release that's 
obsessed with Interop and Consistent Logging, or one married to my cousin. It's 
_possible_ we even make it down to having a release that plays on the American 
Football Team of the University of Arkansas. There are so many great 
possibilities!

In any case, the names have been sent off to legal for vetting. As soon as we 
have a final winner, I'll let you all know.

http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_e53f789ff7acc996

Result

1. Radium (Condorcet winner: wins contests with all other choices)
2. Rocky loses to Radium by 807–556
3. Rex loses to Radium by 807–532, loses to Rocky by 638–615
4. Razorback loses to Radium by 796–516, loses to Rex by 630–626
5. Rock loses to Radium by 841–464, loses to Razorback by 660–569
6. Root loses to Radium by 866–442, loses to Rock by 581–527
7. Raspberry loses to Radium by 891–381, loses to Root by 579–536
8. Ray loses to Radium by 906–355, loses to Raspberry by 553–539
9. Rambler loses to Radium by 916–336, loses to Ray by 542–525
10. Railspur loses to Radium by 904–316, loses to Rambler by 507–468
11. Rampart loses to Radium by 926–301, loses to Railspur by 475–467
12. Richmond loses to Radium by 929–306, loses to Rampart by 506–471
13. Rockies loses to Radium by 926–293, loses to Richmond by 475–462
14. Rosswood loses to Radium by 938–290, loses to Rockies by 463–449
15. Rebecca loses to Radium by 911–350, loses to Rosswood by 484–481
16. Rupert loses to Radium by 945–283, loses to Rebecca by 514–447
17. Revelstoke loses to Radium by 949–269, loses to Rupert by 472–425
18. Robson loses to Radium by 977–250, loses to Revelstoke by 439–423
19. Roderick loses to Radium by 961–243, loses to Robson by 432–409
20. Rossland loses to Radium by 966–241, loses to Roderick by 401–394
21. Rambles loses to Radium by 977–222, loses to Rossland by 421–381
22. Raush loses to Radium by 986–193, loses to Rambles by 405–379
23. Renfrew loses to Radium by 976–213, loses to Raush by 377–363
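For readers unfamiliar with the method: a Condorcet winner is the choice that beats every other choice in head-to-head contests, exactly as the results above show for Radium. A minimal sketch of that tally (illustrative only, not the CIVS implementation; ballots are assumed to rank every candidate):

```python
def condorcet_winner(candidates, ballots):
    """Return the candidate that strictly wins every pairwise
    contest, or None if no such candidate exists.

    Each ballot is a list of candidates in preference order; a
    ballot prefers A to B if A appears earlier in the list.
    """
    def prefers(ballot, a, b):
        return ballot.index(a) < ballot.index(b)

    for cand in candidates:
        beats_all = True
        for other in candidates:
            if other == cand:
                continue
            wins = sum(1 for b in ballots if prefers(b, cand, other))
            losses = len(ballots) - wins
            if wins <= losses:  # must strictly win each contest
                beats_all = False
                break
        if beats_all:
            return cand
    return None
```

When no candidate beats all others (a preference cycle), the function returns None, which is why real polls like CIVS need tie-breaking schemes on top.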

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Taskflow] Current state or the project ?

2017-04-20 Thread Michael Johnson
Hi Robin,

 

The Octavia project (shameless plug: 
https://docs.openstack.org/developer/octavia/) relies on TaskFlow for the core 
workflow.  For us, the TaskFlow project is very stable.

 

Michael

 

From: Robin De-Lillo [mailto:rdeli...@rodeofx.com] 
Sent: Wednesday, April 19, 2017 11:14 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Taskflow] Current state or the project ?

 

Hello Guys,

 

I'm Robin, a software developer at a VFX company based in Canada. As the 
company grows, we are currently looking into redesigning our internal 
processes and workflows in a more nodal/graph-based approach.

 

Ideally we would like to start from an existing library so we don't 
re-implement things from scratch. We found TaskFlow, which, after a couple 
of tests, looks very promising to us. Good work on it!!

 

We were wondering what the current state of this project is. Is it still 
under active development, or a priority for OpenStack? We would definitely 
be happy to contribute to this library in the future; for now we are just 
gathering information to make sure we pick the solution that best suits 
our needs.

 

Thanks a lot,

Robin De Lillo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]

2017-04-20 Thread Michael Turek

Hey Don,

Deployment to Power8 and beyond via the agent-ipmitool driver should 
work fine. We test it regularly here:


https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI

If you'd like some help setting up Ironic for power, feel free to ping 
me on #openstack-ironic. Also check out this blog that I worked on with 
a colleague for some advice:


https://developer.ibm.com/linuxonpower/2017/03/20/setting-openstack-bare-metal-service-power8/

Also a friendly reminder that this question is probably more suited for 
the OpenStack mailing list rather than openstack-dev.


Thanks,
mjturek

On 04/20/2017 08:16 AM, Don maillist wrote:
Does Ironic currently support non-x86 systems? I have a PowerPC ATCA 
blade that I need to use for a very specific blade function. I 
would need to PXE boot the blade to a ramdisk if possible (there are 
no hard drives, only a flash drive).


Best Regards,
Don


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [third-party-ci] pkvmci ironic job breakage details

2017-04-18 Thread Michael Turek

On 04/18/2017 07:56 AM, Vladyslav Drok wrote:

Hey Michael,

On Fri, Apr 14, 2017 at 6:51 PM, Michael Turek 
<mjtu...@linux.vnet.ibm.com> wrote:


Hey ironic-ers,

So our third party CI job for ironic has been, and remains,
broken. I was able to do some investigation today and here's a
summary of what we're seeing. I'm hoping someone might know the
root of the problem.

For reference, please see this paste and the logs of the job that
I was working in:
http://paste.openstack.org/show/606564/

https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/

I've redacted the credentials in the ironic node-show for obvious
reasons but rest assured they are properly set. These commands are
run while
'/opt/stack/new/ironic/devstack/lib/ironic:wait_for_nova_resources'
is looping.

Basically, the ironic hypervisor for the node doesn't appear. As
well, none of the node's properties make it to the hypervisor stats.

There is some more strangeness with the 'count' value from
'openstack hypervisor stats show': though no hypervisors appear,
the count is still 1. Since the run was broken, I decided to
delete node-0 (about 3-5 minutes before the run failed) and see if
it updated the count. It did.

Does anyone have any clue what might be happening here? Any advice
would be appreciated!


So the failure seems to be here -- 
https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/screen-ir-api.txt.gz, 
API and conductor are not able to communicate via RPC for some reason. 
Need to investigate this more. Do you mind filing a bug about this?



Thanks,
mjturek


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks vdrok,

Bug is opened here - https://bugs.launchpad.net/ironic/+bug/1683902
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][ironic][octavia] oslo.config 4.0 will break projects' unit test

2017-04-17 Thread Michael Johnson
Thank you ChangBo, I have resolved the issues in octavia in this patch: 
https://review.openstack.org/457356 up for review.

 

Michael

 

From: ChangBo Guo [mailto:glongw...@gmail.com] 
Sent: Sunday, April 16, 2017 12:32 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [glance][ironic][octavia] oslo.config 4.0 will 
break projects' unit test

 

As I expected, there are some failures in recent periodic tasks [1] when we set 
enforce_type to True by default; we'd better fix them before we release 
oslo.config 4.0. Several people have been working on this:
Nova: https://review.openstack.org/455534 should fix failures

tempest:  https://review.openstack.org/456445 fixed

Keystone:  https://review.openstack.org/455391 wait for oslo.config 4.0

 

We still need help from Glance/Ironic/Octavia

Glance:  https://review.openstack.org/#/c/455522/ need review

Ironic:  Need fix failure in 
http://logs.openstack.org/periodic/periodic-ironic-py27-with-oslo-master/680abfe/testr_results.html.gz
Octavia: Need fix failure in 
http://logs.openstack.org/periodic/periodic-octavia-py35-with-oslo-master/80fee03/testr_results.html.gz


[1] http://status.openstack.org/openstack-health/#/?groupKey=build_name

 

2017-04-04 0:01 GMT+08:00 ChangBo Guo <glongw...@gmail.com>:

Hi ALL,


oslo_config provides the method CONF.set_override [1], which developers usually 
use to change a config option's value in tests. That's convenient. By default the 
parameter enforce_type=False, so the override's type and value are not checked. 
If enforce_type=True is set, the override's type and value are checked. In 
production (runtime) code, oslo_config always checks a config option's value. In 
short, we test and run code in different ways, so there's a gap: a config option 
with the wrong type or an invalid value can pass tests when enforce_type=False 
in consuming projects. That means some invalid or wrong tests are in our code 
base.

We began warning users about this change in September 2016 in [2]. The change 
will prompt consuming projects to write correct test cases for config options.

We would make enforce_type=True by default in [3]. That may break some projects' 
tests, which also exposes wrong unit tests. The failure is easy to fix, and 
fixing it is recommended.



[1] 
https://github.com/openstack/oslo.config/blob/efb287a94645b15b634e8c344352696ff85c219f/oslo_config/cfg.py#L2613
[2] https://review.openstack.org/#/c/365476/
[3] https://review.openstack.org/328692
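The gap ChangBo describes can be illustrated without oslo.config itself. This stdlib-only sketch mimics the enforce_type behavior; the class and method here are hypothetical stand-ins for an integer config option, not the oslo.config implementation:

```python
class IntOpt:
    """A minimal stand-in for an integer config option."""

    def __init__(self, name, default=0):
        self.name = name
        self.value = default

    def set_override(self, override, enforce_type=False):
        # With enforce_type=True the override is validated the same
        # way a runtime-loaded value would be; with False (the old
        # default) any object is accepted, so a unit test can "pass"
        # with a value that production code would reject.
        if enforce_type:
            self.value = int(override)  # raises on a bad type/value
        else:
            self.value = override
```

A test that overrides `workers` with a non-numeric string succeeds under the lax default but raises `ValueError` once enforcement is on, which is exactly the class of broken tests the 4.0 flip exposes.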



-- 

ChangBo Guo(gcb)




-- 

ChangBo Guo(gcb)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [third-party-ci] pkvmci ironic job breakage details

2017-04-17 Thread Michael Turek

On 04/17/2017 02:25 PM, Matt Riedemann wrote:


On 4/14/2017 10:51 AM, Michael Turek wrote:

Hey ironic-ers,

So our third party CI job for ironic has been, and remains, broken. I
was able to do some investigation today and here's a summary of what
we're seeing. I'm hoping someone might know the root of the problem.

For reference, please see this paste and the logs of the job that I was
working in:
http://paste.openstack.org/show/606564/
https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/ 




I've redacted the credentials in the ironic node-show for obvious
reasons but rest assured they are properly set. These commands are run
while
'/opt/stack/new/ironic/devstack/lib/ironic:wait_for_nova_resources' is
looping.

Basically, the ironic hypervisor for the node doesn't appear. As well,
none of the node's properties make it to the hypervisor stats.

There is some more strangeness with the 'count' value from 'openstack
hypervisor stats show': though no hypervisors appear, the count is still
1. Since the run was broken, I decided to delete node-0 (about 3-5
minutes before the run failed) and see if it updated the count. It did.

Does anyone have any clue what might be happening here? Any advice would
be appreciated!

Thanks,
mjturek


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


See:

http://lists.openstack.org/pipermail/openstack-dev/2017-April/115486.html


Thanks Matt,

Unfortunately doesn't seem to be the fix.

I did a quick test run of the job and ran "nova-manage cell_v2 
discover_hosts --verbose" manually while ironic:wait_for_nova_resources 
was looping (where we eventually fail). This fixes the issue of the 
hypervisor not appearing, but the resources associated with the 
hypervisor (vcpus, memory_mb, etc) remain 0.
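The `wait_for_nova_resources` helper being debugged here is essentially a poll-until-predicate loop with a timeout. A generic sketch of that pattern (function names hypothetical, not the devstack implementation):

```python
import time


def wait_for(predicate, timeout=300, interval=5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True on success, False on timeout.  clock/sleep are
    injectable so tests don't have to wait in real time.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if predicate():
            return True
        sleep(interval)
    return False
```

In the CI failure above, the predicate (hypervisor stats showing the node's vcpus and memory) never becomes true, so the loop runs until its deadline and the job fails.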


mjturek


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [third-party-ci] pkvmci ironic job breakage details

2017-04-14 Thread Michael Turek

Hey ironic-ers,

So our third party CI job for ironic has been, and remains, broken. I 
was able to do some investigation today and here's a summary of what 
we're seeing. I'm hoping someone might know the root of the problem.


For reference, please see this paste and the logs of the job that I was 
working in:

http://paste.openstack.org/show/606564/
https://dal05.objectstorage.softlayer.net/v1/AUTH_3d8e6ecb-f597-448c-8ec2-164e9f710dd6/pkvmci/ironic/25/454625/10/check-ironic/tempest-dsvm-ironic-agent_ipmitool/0520958/

I've redacted the credentials in the ironic node-show for obvious 
reasons but rest assured they are properly set. These commands are run 
while 
'/opt/stack/new/ironic/devstack/lib/ironic:wait_for_nova_resources' is 
looping.


Basically, the ironic hypervisor for the node doesn't appear. As well, 
none of the node's properties make it to the hypervisor stats.


There is some more strangeness with the 'count' value from 'openstack 
hypervisor stats show': though no hypervisors appear, the count is still 
1. Since the run was broken, I decided to delete node-0 (about 3-5 
minutes before the run failed) and see if it updated the count. It did.


Does anyone have any clue what might be happening here? Any advice would 
be appreciated!


Thanks,
mjturek


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-04-13 Thread michael mccune

Greetings OpenStack community,

This week's meeting was lightly attended but provided a useful 
discussion about the future of the working group and how we will 
continue to improve the API experience for all OpenStack users. The 
group is considering its role with respect to the guidelines that it 
creates and how to not only increase our membership but also explore 
other options for improving the overall state of API usability and 
consistency within OpenStack. Although no firm actions have resulted 
from this discussion, we have agreed to keep the topic open, have more 
face to face conversations about it at the upcoming forum event, and to 
reach out to other working groups with the intent to learn more about 
community expectations and usages with regards to the APIs in OpenStack.


# Newly Published Guidelines

Nothing new this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Define pagination guidelines
  https://review.openstack.org/#/c/446716/

* Create a set of api interoperability guidelines
  https://review.openstack.org/#/c/421846/

* Recommend the correct HTTP method for tags
  https://review.openstack.org/451536

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* Mention max length limit information for tags
  https://review.openstack.org/#/c/447344/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/
  On hold.

* WIP: microversion architecture archival doc (very early; not yet ready 
for review)

  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address 
your concerns in an email to the OpenStack developer mailing list[1] 
with the tag "[api]" in the subject. In your email, you should include 
any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [2].


Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] 
https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



Re: [openstack-dev] [barbican] How to update cert in the secret

2017-04-04 Thread Michael Johnson
Hi Andrey,

 

As we discussed on IRC, the listeners in LBaaS v2 allow you to update the 
barbican container IDs.  This will start the certificate update process on the 
load balancers with the new content from barbican.

 

The neutron client, as you noted, does not appear to have this capability, but 
the API supports this as the primary means to update certificate content for 
LBaaS.  This will be included in the octavia OpenStack client.

 

Michael

 

From: Andrey Grebennikov [mailto:agrebenni...@mirantis.com] 
Sent: Monday, April 3, 2017 12:14 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [barbican] How to update cert in the secret

 

Hey Barbican folks, I have a question regarding the functionality of the 
secrets containers please.

 

Once my secret is created, is there a way to update it down the road with 
another cert?

The usecase is pretty common - using barbican with neutron lbaas.

When the load balancer in the lbaas backend gets the cert from barbican, there 
seems to be no way to update the neutron load balancer with a new secret.

The only way to update the cert within the balancer is to update the barbican 
secret and trigger the balancer to re-request the cert (while adding the pool 
member for example).
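
A hedged sketch of the rotation flow Michael describes above, assuming the barbican and neutron-lbaas CLIs of this era; the container ref, listener name, and file path are illustrative and flags may differ in your deployment:

```shell
# 1. Store the renewed certificate as a new barbican secret
#    (name and file path here are illustrative).
openstack secret store --name renewed-cert \
    --payload "$(cat renewed-cert.pem)"

# 2. Point the listener at the new container ref; per the reply above,
#    the LBaaS v2 API supports updating the container ID on a listener,
#    which triggers the certificate update on the load balancer.
neutron lbaas-listener-update \
    --default-tls-container-ref <new-container-ref> my-listener
```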

 

Any help is greatly appreciated!

 

-- 

Andrey Grebennikov



Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-27 Thread Michael Johnson
I have a few comments on the updated Project Navigator.

 

1.  I hope this is mostly automated at this point?  The current content for 
Project Navigator is very out of date (Mitaka?) and folks have asked why 
projects are not listed there.
2.  What is the policy around the tags?  For octavia I see that standard 
deprecation isn’t listed there even though our neutron-lbaas repository does 
have the tag.  Granted, I need to update the octavia repository to also have 
the tag, but with projects that have multiple sub-projects, how is this listing 
determined?
3.  How is the project age determined?  I see that octavia shows one year, 
but it has been an active project since 2014 (2012 if you count neutron-lbaas, 
now part of octavia).  This could be confusing for folks who have attended 
summit sessions in the past or downloaded the packages previously.
4.  API version history is another item I am curious to understand how it 
is calculated.  It seems confusing with actual project API 
versions/microversions when it links to the releases page.  API version history 
is not a one-to-one relationship with project releases.
5.  The “About this project” seems to come from the developer 
documentation.  Is this something the PTL can update?
6.  Is there a way to highlight that a blank adoption figure is because a project 
was not included in the survey?  This can be deceiving and lead someone to 
think that a project is unused.  (Looking at page 54 of the April 2016 survey, 
I expect load balancing is widely used.)
7.  Finally, from reading my above questions/comments, it would be nice to 
have a “PTL guide to project navigator”.

 

Thank you for updating this, folks have asked us why octavia was not listed.

 

Michael

 

 

From: Lauren Sell [mailto:lau...@openstack.org] 
Sent: Friday, March 24, 2017 9:58 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] Project Navigator Updates - Feedback Request

 

Hi everyone,

 

We’ve been talking for some time about updating the project navigator, and we 
have a draft ready to share for community feedback before we launch and 
publicize it. One of the big goals coming out of the joint TC/UC/Board meeting 
a few weeks ago[1] was to help better communicate ‘what is openstack?’ and this 
is one step in that direction.

A few goals in mind for the redesign:
- Represent all official, user-facing projects and deployment services in the 
navigator
- Better categorize the projects by function in a way that makes sense to 
prospective users (this may evolve over time as we work on mapping the 
OpenStack landscape)
- Help users understand which projects are mature and stable vs emerging
- Highlight popular project sets and sample configurations based on different 
use cases to help users get started

For a bit of context, we’re working to give each OpenStack official project a 
stronger platform as we think of OpenStack as a framework of composable 
infrastructure services that can be used individually or together as a powerful 
system. This includes the project mascots (so we in effect have logos to 
promote each component separately), updates to the project navigator, and 
bringing back the “project updates” track at the Summit to give each PTL/core 
team a chance to provide an update on their project roadmap (to be recorded and 
promoted in the project navigator among other places!). 

We want your feedback on the project navigator v2 before it launches. Please 
take a look at the current version on the staging site and provide feedback on 
this thread.

http://devbranch.openstack.org/software/project-navigator/

Please review the overall concept and the data and description for your project 
specifically. The data is primarily pulled from TC tags[2] and Ops tags[3]. 
You’ll notice some projects have more information available than others for 
various reasons. That’s one reason we decided to downplay the maturity metric 
for now and the data on some pages is hidden. If you think your project is 
missing data, please check out the repositories and submit changes or again 
respond to this thread.

Also know this will continue to evolve and we are open to feedback. As I 
mentioned, a team that formed at the joint strategy session a few weeks ago is 
tackling how we map OpenStack projects, which may be reflected in the 
categories. And I suspect we’ll continue to build out additional tags and 
better data sources to be incorporated.

Thanks for your feedback and help.

Best,
Lauren

[1] 
http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
[2] https://governance.openstack.org/tc/reference/tags/
[3] https://wiki.openstack.org/wiki/Operations/Tags

 


Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-20 Thread Michael Johnson
Hi Saverio,

First, please note, in the future the best tag for load balancing is
[octavia] as it is no longer part of the neutron project.

I am sorry that you are so anxious and confused about the current state of
load balancing for OpenStack.  Let me clarify a few things:

1. LBaaSv2 is not going away and is not deprecated.  The neutron-lbaas code
base is going into deprecation in favor of the octavia code base.  I will
highlight two things, among others, we are doing to ease this transition for
operators:
a. For some time into the future you will be able to continue to use LBaaSv2
via neutron using the proxy driver in neutron-lbaas.
b. There will be migration procedures and scripts that will move, in place,
load balancers from neutron-lbaas into octavia.
2. Deprecation means we will not continue to develop features for
neutron-lbaas, but it will remain in the code base for at least two more
releases and continue to receive bug fixes.  It's a formal way of saying,
hey, in the future we are going to remove this.
3. New features will be added to the octavia code base.  It is only
neutron-lbaas that will be going into feature freeze for new feature
development due to the transition.
4. Any tools written against the neutron endpoint for neutron-lbaas using
the LBaaSv2 API will work with Octavia by updating the endpoint you are
pointing to from neutron to octavia.
5. We are not making any changes to stable/liberty, stable/mitaka,
stable/newton, or stable/ocata releases.  I will note, per OpenStack stable
release policy, liberty is EOL and mitaka will be next month and we are not
allowed to add new features to any previous releases.   Please see the
OpenStack stable policy here:
https://docs.openstack.org/project-team-guide/stable-branches.html
6. Octavia was available and, in fact, the reference load balancing driver
in Liberty.
7. Multiple operators are represented on the core review team for octavia.
We try really hard to listen to feedback we get and to do what is best for
folks using load balancing in OpenStack.  It is unfortunate our
presentations at the Barcelona summit were denied and we did not get an
opportunity to share our plan with the community and get feedback.  If you
have concerns I encourage you to reach out to us via our weekly IRC
meetings, our channel on IRC #openstack-lbaas, or via the mailing list with
the [octavia] tag.  As you know, I have been responding to your emails with
load balancing questions.

To answer Zhi's e-mail:

This is correct, if you are using the legacy haproxy namespace driver, and
not the octavia driver, there is currently no easy method to migrate the
ownership of a load balancer from one agent to another.
The legacy haproxy namespace driver is/was not intended for high
availability.  If you want a highly available open source load balancing
option, I highly recommend you use the octavia driver instead of the haproxy
namespace driver.  It was designed to provide scale and availability.  You
would not have the issue you are describing with the octavia driver.

That said, if you want to continue to develop new features for the haproxy
namespace driver, we should start planning to do so in the octavia code
base.
We will be starting work on a port of the haproxy namespace driver into
octavia soon.  We are however discussing what the future should be for this
driver given its limitations.  I think the best plan will be to port it over
into a standalone driver that folks can contribute to if they have a need
for it and we can deprecate it if there is no longer support for it.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 17, 2017 4:55 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

Hello there,

I am just back from the Ops Midcycle where Heidi Joy Tretheway reported some
data from the user survey.

So if we look at deployments with more than 100 servers, NO ONE IS USING
NEWTON yet. And I scream this loud. Everyone is still on Liberty or Mitaka.

I am just struggling to upgrade to LBaaSv2, only to hear that it is already
going into deprecation. The feature Zhi is proposing is also important for me
once I go to production.

I would encourage devs to listen more to operators' feedback. Also, you devs
can't just ignore that users are still running Liberty/Mitaka, so you need to
change something in this way of working or all the users will run away.

thank you

Saverio


On 16/03/17 16:26, Kosnik, Lubosz wrote:
> Hello Zhi,
> Just one small piece of information. Yesterday at the Octavia weekly meeting 
> we decided that we're gonna add new features to LBaaSv2 only till Pike-1, so 
> the window is very small.
> This decision was made as LBaaSv2 is currently Octavia delivery, not 
> Neutron anymore and this project is going into deprecations stage.
> 
> Cheers,
> Lubosz
> 

Re: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are missing completely from openstack cli ?

2017-03-17 Thread Michael Johnson
Yes, as previously announced, we deferred development of the OpenStack
Client (OSC) until Pike.
Work has started on the OSC plugin for Octavia and we expect it to be
available in Pike.

Neutron CLI is deprecated which means it will go away in the future, but is
still available for use and is still available on the stable branches for
previous releases.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 17, 2017 6:21 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are
missing completely from openstack cli ?

Client version: openstack 3.9.0

I can't find any lbaas commands. I have to use the 'neutron' client.

With every command I get:
neutron CLI is deprecated and will be removed in the future. Use openstack
CLI instead.

Is LBaaS even going to be implemented in the unified openstack client ?

thank you

Saverio

--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15,
direct +41 44 268 1573 saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories





Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-13 Thread Michael Johnson
I can confirm that the 1.0.0 release of neutron-lbaas-dashboard is working
on stable/newton.
I included my installation steps in the below linked bug.

As mentioned in the first e-mail, the instructions say to only install one
of the two files in the enabled directory.  I suspect that is the issue you
are seeing.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Monday, March 13, 2017 5:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success.  I think 
> you had that discussion on the IRC channel, so I won't repeat it here.
> 
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, 
> you must have LBaaS v2 deployed for the neutron-lbaas-dashboard to 
> work.  If you are trying to use LBaaS v1, you can use the legacy 
> panels included in the older versions of horizon.
> 
[..CUT..]
> If you think there is an open bug for the dashboard, please report it 
> in https://bugs.launchpad.net/neutron-lbaas-dashboard



Hello,
I updated the bug
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1621403

Can anyone clarify the version matrix to use between the horizon version and
the neutron-lbaas-dashboard panels versions ?

can anyone confirm that both files
_1481_project_ng_loadbalancersv2_panel.py and file
_1480_project_loadbalancersv2_panel.py need to be installed ?

Is it okay to use branch master of neutron-lbaas-dashboard with horizon
stable/newton ?

thank you

Saverio





Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-13 Thread Michael Johnson
Hi Saverio,

I did a fresh install today with master versions of both OpenStack and the
neutron-lbaas-dashboard just to make sure the panels are working as
expected.  It went fine.

https://usercontent.irccloud-cdn.com/file/4Zgl9SB3/

To answer your version question:
Stable/mitaka neutron-lbaas-dashboard should work with stable/mitaka and
stable/newton OpenStack
Stable/ocata neutron-lbaas-dashboard works with stable/ocata OpenStack

Per the instructions in the README.rst and on PyPi, ONLY install the
_1481_project_ng_loadbalancersv2_panel.py file.  Do not install both, it
will fail.
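
For reference, a hedged sketch of that install step, assuming a devstack-style horizon under /opt/stack/horizon and pip-installed dashboard files; paths will differ on packaged installs:

```shell
pip install neutron-lbaas-dashboard

# Copy ONLY the _1481 panel file into horizon's enabled directory;
# installing _1480 as well will fail, as noted above.
cp /usr/local/lib/python2.7/dist-packages/neutron_lbaas_dashboard/enabled/_1481_project_ng_loadbalancersv2_panel.py \
   /opt/stack/horizon/openstack_dashboard/local/enabled/

# Rebuild static assets and restart the web server.
python /opt/stack/horizon/manage.py collectstatic --noinput
python /opt/stack/horizon/manage.py compress
sudo service apache2 restart
```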

I am not sure if you can use the master branch of neutron-lbaas-dashboard
with a newton version of horizon.  This is not a combination we test and/or
support.  It may work.
Someone from the horizon team may have more insights on that, but I think
the best answer is to get it going with the known good combinations and then
to test mixed releases.

I will now start over on stable/newton and test it out.  I will let you know
if I find a problem.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Monday, March 13, 2017 5:47 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success.  I think 
> you had that discussion on the IRC channel, so I won't repeat it here.
> 
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, 
> you must have LBaaS v2 deployed for the neutron-lbaas-dashboard to 
> work.  If you are trying to use LBaaS v1, you can use the legacy 
> panels included in the older versions of horizon.
> 
[..CUT..]
> If you think there is an open bug for the dashboard, please report it 
> in https://bugs.launchpad.net/neutron-lbaas-dashboard



Hello,
I updated the bug
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1621403

Can anyone clarify the version matrix to use between the horizon version and
the neutron-lbaas-dashboard panels versions ?

can anyone confirm that both files
_1481_project_ng_loadbalancersv2_panel.py and file
_1480_project_loadbalancersv2_panel.py need to be installed ?

Is it okay to use branch master of neutron-lbaas-dashboard with horizon
stable/newton ?

thank you

Saverio





Re: [openstack-dev] [tc][appcat][murano][app-catalog] The future of the App Catalog

2017-03-10 Thread Michael Glasgow

On 3/9/2017 6:08 AM, Thierry Carrez wrote:

Christopher Aedo wrote:

On Mon, Mar 6, 2017 at 3:26 AM, Thierry Carrez <thie...@openstack.org> wrote:

[...]
In parallel, Docker developed a pretty successful containerized
application marketplace (the Docker Hub), with hundreds of thousands of
regularly-updated apps. Keeping the App Catalog around (including its
thinly-wrapped Docker container Murano packages) make us look like we
are unsuccessfully trying to compete with that ecosystem, while
OpenStack is in fact completely complementary.


Without something like Murano "thinly wrapping" docker apps, how would
you propose current users of OpenStack clouds deploy docker apps?  Or
any other app for that matter?  It seems a little unfair to talk about
murano apps this way when no reasonable alternative exists for easily
deploying docker apps.  When I look back at the recent history of how
we've handled containers (nova-docker, magnum, kubernetes, etc) it
does not seem like we're making it easy for the folks who want to
deploy a container on their cloud...

[...]
I just think that adding the Murano abstraction in the middle of it and
using an AppCatalog-provided Murano-powered generic Docker container
wrapper is introducing unnecessary options and complexity -- options
that are strategically hurting us when we talk to those adjacent
communities...


I don't disagree with any of your observations thus far, but I'm curious 
what people think this portends for the future of Murano with respect to 
non-containerized workloads.


Let's assume for a moment that VMs aren't going away tomorrow.  Some 
won't agree, but I'm not sure that whole debate adds a lot of value here.


In that context, Murano is interesting to me because it seems like the 
OO-like abstraction it provides is the right layer at which to link 
application components for such workloads, where you have, say, a Fruit 
class that can be extended for Apples and Oranges, and any type of 
Animal can come along and consume any type of Fruit.  While not a 
panacea, there are some clear advantages to working at this layer 
relative to trying to link everything together at the level of Heat, for 
example.


For this strategy to work, a critical element will be driving 
standardization in those interfaces.  I had seen the App Catalog as a 
venue for driving that, not necessarily today but possibly at some point 
in the future.  It's not the *only* place to do that, and after batting 
it around with some of the guys here, I'm starting to think it's not 
even the best place to do it.  But it was a thought I had when first 
reading this thread.


It makes sense to me that for container workloads, the COE should handle 
all of this orchestration, and OpenStack should just get out of the way. 
 But in the case of VMs, Murano's abstraction seems useful and holds 
the promise of reducing overall complexity.  So if we truly believe that 
OpenStack and containers are complementary, it would be great if someone 
can articulate a vision for that relationship.


To be clear, I have no strong preference wrt the future of the App 
Catalog.  If anything, I'd lean toward retirement for all the reasons 
that have been given.  But I do wish that someone more familiar than me 
with this area could speak to the longer term vision for Murano. 
Granted it's an orthogonal concern, but clearly this decision will have 
some effects on its future.


--
Michael Glasgow



Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2017-03-10 Thread Michael Johnson
Hi Syed,

 

To my knowledge the LBaaS team did not create any upgrade plan or tools to move 
load balancers from V1 to V2.  The data model is significantly different (and 
better) with V2 and I suspect that caused some challenges.

I know there was an as-is database conversion script contributed by an 
operator/packager that might help someone develop a migration path if their 
deployment wasn't using one of the incompatible configurations, but that would 
only be one piece of the puzzle.

 

Since development beyond security fixes for v1 halted over two releases ago and 
the last of the v1 code will be removed from OpenStack in about 32 days (mitaka 
goes EOL 4/10/17) I think it is going to be left to the last few folks still 
running LBaaS v1 to plan their migrations.  Most of the LBaaS team from the 
time of v1 deprecation are no longer on the team so we don’t really have folks 
experienced with v1 available any longer.

 

I cannot speak to how hard or easy it would be to create a heat migration 
template to recreate the v1 load balancers under v2.

 

Beyond that, I can assure you that the migration from neutron-lbaas to octavia 
will have migration procedures and tools to automate the process.

 

Michael

 

From: Syed Armani [mailto:dce3...@gmail.com] 
Sent: Friday, March 10, 2017 1:58 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are 
weready?

 

Folks,

 

I am going to ask the question raised by Zane one more time:

 

Is there a migration plan for Heat users who have existing stacks containing 
the v1 resources?

 

Cheers,

Syed

 

On Thu, Aug 25, 2016 at 7:10 PM, Assaf Muller <as...@redhat.com 
<mailto:as...@redhat.com> > wrote:

On Thu, Aug 25, 2016 at 7:35 AM, Gary Kotton <gkot...@vmware.com 
<mailto:gkot...@vmware.com> > wrote:
> Hi,
> At the moment it is still not clear to me the upgrade process from V1 to V2. 
> The migration script https://review.openstack.org/#/c/289595/ has yet to be 
> approved. Does this support all drivers or is this just the default reference 
> implementation driver?

The migration script doesn't have a test, so we really have no idea if
it's going to work.


> Are there people still using V1?
> Thanks
> Gary
>
> On 8/25/16, 4:25 AM, "Doug Wiegley" <doug...@parksidesoftware.com 
> <mailto:doug...@parksidesoftware.com> > wrote:
>
>
> > On Mar 23, 2016, at 4:17 PM, Doug Wiegley <doug...@parksidesoftware.com 
> <mailto:doug...@parksidesoftware.com> > wrote:
> >
> > Migration script has been submitted, v1 is not going anywhere from 
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >
> > I’m thinking in this order:
> >
> > - remove jenkins jobs
> > - wait for heat to remove their jenkins jobs ([heat] added to this 
> thread, so they see this coming before the job breaks)
> > - remove q-lbaas from devstack, and any references to lbaas v1 in 
> devstack-gate or infra defaults.
> > - remove v1 code from neutron-lbaas
>
> FYI, all of the above have completed, and the final removal is in the 
> merge queue: https://review.openstack.org/#/c/286381/
>
> Mitaka will be the last stable branch with lbaas v1.
>
> Thanks,
> doug
>
> >
> > Since newton is now open for commits, this process is going to get 
> started.
> >
> > Thanks,
> > doug
> >
> >
> >
> >> On Mar 8, 2016, at 11:36 AM, Eichberger, German 
> <german.eichber...@hpe.com <mailto:german.eichber...@hpe.com> > wrote:
> >>
> >> Yes, it’s Database only — though we changed the agent driver in the DB 
> from V1 to V2 — so if you bring up a V2 with that database it should 
> reschedule all your load balancers on the V2 agent driver.
> >>
> >> German
> >>
> >>
> >>
> >>
> >> On 3/8/16, 3:13 AM, "Samuel Bercovici" <samu...@radware.com 
> <mailto:samu...@radware.com> > wrote:
> >>
> >>> So this looks like only a database migration, right?
> >>>
> >>> -Original Message-
> >>> From: Eichberger, German [mailto:german.eichber...@hpe.com 
> <mailto:german.eichber...@hpe.com> ]
> >>> Sent: Tuesday, March 08, 2016 12:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
> weready?
> >>>
> >>

Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-10 Thread Michael Johnson
Yes, folks have recently deployed the dashboard with success.  I think you
had that discussion on the IRC channel, so I won't repeat it here.

Please note, the neutron-lbaas-dashboard does not support LBaaS v1, you must
have LBaaS v2 deployed for the neutron-lbaas-dashboard to work.  If you are
trying to use LBaaS v1, you can use the legacy panels included in the older
versions of horizon.

The question asked is a very old question and unfortunately the "Ask" site
doesn't do search or notifications very well.  This question hasn't come up
on our notification lists.  Sigh.

If you think there is an open bug for the dashboard, please report it in
https://bugs.launchpad.net/neutron-lbaas-dashboard

Michael

-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 10, 2017 8:04 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

I spent all day trying to deploy a Horizon instance with working panels
for LBaaSv2.
https://github.com/openstack/neutron-lbaas-dashboard

I tried stable/ocata and I am never able to list existing load balancers or
create a new loadbalancer.

Looks like I am not the only one with this issue:
https://ask.openstack.org/en/question/96790/lbaasv2-dashboard-issues/

Is there anyone that has a working setup ?

Should I open a bug here?
https://bugs.launchpad.net/octavia/+filebug

Thanks

Saverio


On 09/03/17 16:19, Saverio Proto wrote:
> Hello,
> 
> I managed to do the database migration.
> 
> I had to skip this logic:
> https://github.com/openstack/neutron-lbaas/blob/master/tools/database-
> migration-from-v1-to-v2.py#L342-L353
> 
> I had to force flag=True
> 
> That code obviously breaks if you have LBaaS used by more than 1 tenant.
> 
> What was the goal ? to make sure that a given healthmonitor is not 
> reused in multiple pools ?
> 
> Should the right approach be to check if these two values are the same ?:
> 
> select count(DISTINCT monitor_id) from poolmonitorassociations; select 
> count(monitor_id) from poolmonitorassociations;
> 
> Second question: should the old tables from LBaaSV1 be dropped ?
> 
> Please give me feedback so I can fix the code and submit a review.
> 
> thank you
> 
> Saverio
> 
> 
> On 09/03/17 13:38, Saverio Proto wrote:
>>> I would recommend experimenting with the 
>>> database-migration-from-v1-to-v2.py
>>> script and working with your vendor (if you are using a vendor load 
>>> balancing engine) on a migration path.
>>
>>
>> Hello,
>> there is no vendor here to help us :)
>>
>> I made a backup of the current DB.
>>
>> I identified this folder on our Neutron server:
>>
>> /usr/lib/python2.7/dist-packages/neutron_lbaas/db/migration ; tree .
>> |-- alembic_migrations
>> |   |-- env.py
>> |   |-- env.pyc
>> |   |-- __init__.py
>> |   |-- __init__.pyc
>> |   |-- README
>> |   |-- script.py.mako
>> |   `-- versions
>> |   |-- 364f9b6064f0_agentv2.py
>> |   |-- 364f9b6064f0_agentv2.pyc
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.py
>> |   |-- 4b6d8d5310b8_add_index_tenant_id.pyc
>> |   |-- 4ba00375f715_edge_driver.py
>> |   |-- 4ba00375f715_edge_driver.pyc
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.py
>> |   |-- 4deef6d81931_add_provisioning_and_operating_statuses.pyc
>> |   |-- CONTRACT_HEAD
>> |   |-- EXPAND_HEAD
>> |   |-- kilo_release.py
>> |   |-- kilo_release.pyc
>> |   |-- lbaasv2.py
>> |   |-- lbaasv2.pyc
>> |   |-- lbaasv2_tls.py
>> |   |-- lbaasv2_tls.pyc
>> |   |-- liberty
>> |   |   |-- contract
>> |   |   |   |-- 130ebfdef43_initial.py
>> |   |   |   `-- 130ebfdef43_initial.pyc
>> |   |   `-- expand
>> |   |   |-- 3345facd0452_initial.py
>> |   |   `-- 3345facd0452_initial.pyc
>> |   |-- mitaka
>> |   |   `-- expand
>> |   |   |-- 3426acbc12de_add_flavor_id.py
>> |   |   |-- 3426acbc12de_add_flavor_id.pyc
>> |   |   |-- 3543deab1547_add_l7_tables.py
>> |   |   |-- 3543deab1547_add_l7_tables.pyc
>> |   |   |-- 4a408dd491c2_UpdateName.py
>> |   |   |-- 4a408dd491c2_UpdateName.pyc
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.py
>> |   |   |-- 62deca5010cd_add_tenant_id_index_for_l7_tables.pyc
>> |   |   |-- 6aee0434f911_independent_pools.py
>> |   |   `-- 6aee0434f911_independent_pools.pyc
>> |   |-- start_neutron_lbaas.py
>> |   `-- s
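
Saverio's distinct-vs-total count check can be sketched as a self-contained
example. This uses an in-memory SQLite table as a stand-in for the
neutron-lbaas database; the table and column names follow the thread, but the
sample data is invented:

```python
import sqlite3

# Toy stand-in for the neutron-lbaas database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE poolmonitorassociations (pool_id TEXT, monitor_id TEXT)")
conn.executemany(
    "INSERT INTO poolmonitorassociations VALUES (?, ?)",
    [("pool-a", "mon-1"), ("pool-b", "mon-2"), ("pool-c", "mon-1")],  # mon-1 reused
)

total = conn.execute(
    "SELECT count(monitor_id) FROM poolmonitorassociations").fetchone()[0]
distinct = conn.execute(
    "SELECT count(DISTINCT monitor_id) FROM poolmonitorassociations").fetchone()[0]

# If the counts differ, at least one health monitor is shared by multiple
# pools and the v1-to-v2 migration would need manual attention.
shared_monitors_exist = total != distinct
print(total, distinct, shared_monitors_exist)  # 3 2 True
```

If the two counts are equal, every monitor is associated with exactly one
pool and the flag check the script performs would pass for every tenant.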

[openstack-dev] [ironic] [neutron] Should the ironic-neutron meeting start back up for pike?

2017-03-07 Thread Michael Turek

Hey all,

So at yesterday's ironic IRC meeting, the question came up of whether or not
the ironic-neutron integration meeting should start back up. My
understanding is that this meeting died down as it became more status
oriented.


I'm wondering if it'd be worthwhile to kick it off again, as four of Pike's
high-priority items are neutron-integration focused.


Personally it'd be a meeting I'd attend this cycle but I could 
understand if it's more trouble than it's worth.


Thoughts?

Thanks,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-07 Thread Michael Johnson
Saverio,

Unfortunately back when LBaaS v1 was deprecated (liberty) no automated
migration path was developed to move from LBaaS v1 to v2.  Some manual
database migration scripts were contributed, but you may still have
incompatible v1 load balancers that require manual intervention.  There is a
note about this in the networking guide:
https://docs.openstack.org/ocata/networking-guide/config-lbaas.html

I would recommend experimenting with the database-migration-from-v1-to-v2.py
script and working with your vendor (if you are using a vendor load
balancing engine) on a migration path.

Michael

-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Tuesday, March 7, 2017 9:35 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

Hello Michael,

thanks. Using your email I read this page:
https://docs.openstack.org/ocata/networking-guide/config-lbaas.html

It is still not clear to me if the command:

neutron-db-manage --subproject neutron-lbaas upgrade head

will make the necessary database migrations from LBaaS v1 to v2?

Does this command trigger the execution of this code?
https://github.com/openstack/neutron-lbaas/blob/master/tools/database-migration-from-v1-to-v2.py

On what OpenStack version should I run that neutron-db-manage command?
I am currently in Mitaka.

Here I read:
https://docs.openstack.org/releasenotes/neutron-lbaas/newton.html
LBaaS API v1 has been removed. Do not upgrade before migrating to LBaaS API
v2.

This means I have to run 'neutron-db-manage --subproject neutron-lbaas
upgrade head' before upgrading?

Am I missing the page where the migration from V1 to V2 is explained?

thank you

Saverio


On 07/03/17 17:33, Michael Johnson wrote:
> Hi Saverio,
> 
> I think the confusion is coming from neutron/neutron-lbaas/octavia.
> 
> Neutron-lbaas, prior to the Ocata series, was a sub-project of neutron 
> and as such has its own release notes:
> https://docs.openstack.org/releasenotes/neutron-lbaas/
> 
> As of Ocata, neutron-lbaas is part of the Octavia project
> (https://governance.openstack.org/tc/reference/projects/octavia.html) 
> and is no longer a sub-project of neutron.  In fact, we are actively 
> working to merge the neutron-lbaas v2 API into the Octavia API to 
> create a combined project.
> 
> Going forward you will probably want to monitor both neutron-lbaas and 
> the octavia release notes:
> https://docs.openstack.org/releasenotes/neutron-lbaas/
> https://docs.openstack.org/releasenotes/octavia/
> 
> To answer your original question, the LBaaS v1 API was removed in the 
> newton release of neutron-lbaas 
> (https://docs.openstack.org/releasenotes/neutron-lbaas/newton.html).
> 
> Michael
> 
> 
> -Original Message-
> From: Saverio Proto [mailto:saverio.pr...@switch.ch]
> Sent: Tuesday, March 7, 2017 1:09 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade 
> from LBaaS v1 to v2
> 
> Hello,
> 
> I am upgrading from Mitaka to Newton.
> 
> Our OpenStack cloud has LBaaSv1 in production.
> 
> I read all the following release notes:
> 
> https://docs.openstack.org/releasenotes/neutron/liberty.html
> https://docs.openstack.org/releasenotes/neutron/mitaka.html
> https://docs.openstack.org/releasenotes/neutron/newton.html
> 
> In the liberty release notes I read:
> "The LBaaS V1 API is marked as deprecated and is planned to be removed 
> in a future release. Going forward, the LBaaS V2 API should be used."
> 
> But which release drops LBaaS V1?
> 
> I see this script is merged in stable/newton:
> 
> https://review.openstack.org/#/c/289595/
> 
> Can I still use LBaaS V1 in Newton and do the migration before 
> upgrading to Ocata?
> 
> The cherry-pick to Mitaka was abandoned:
> https://review.openstack.org/#/c/370103/
> 
> The Ocata release notes again don't say anything about LBaaS:
> https://docs.openstack.org/releasenotes/neutron/ocata.html
> 
> thank you
> 
> Saverio
> 
> 
> 
> --
> SWITCH
> Saverio Proto, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 
> 15, direct +41 44 268 1573 saverio.pr...@switch.ch, 
> http://www.switch.ch
> 
> http://www.switch.ch/stories
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac

Re: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from LBaaS v1 to v2

2017-03-07 Thread Michael Johnson
Hi Saverio,

I think the confusion is coming from neutron/neutron-lbaas/octavia.

Neutron-lbaas, prior to the Ocata series, was a sub-project of neutron and as
such has its own release notes:
https://docs.openstack.org/releasenotes/neutron-lbaas/

As of Ocata, neutron-lbaas is part of the Octavia project
(https://governance.openstack.org/tc/reference/projects/octavia.html) and is
no longer a sub-project of neutron.  In fact, we are actively working to
merge the neutron-lbaas v2 API into the Octavia API to create a combined
project.

Going forward you will probably want to monitor both neutron-lbaas and the
octavia release notes:
https://docs.openstack.org/releasenotes/neutron-lbaas/
https://docs.openstack.org/releasenotes/octavia/

To answer your original question, the LBaaS v1 API was removed in the newton
release of neutron-lbaas
(https://docs.openstack.org/releasenotes/neutron-lbaas/newton.html).

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Tuesday, March 7, 2017 1:09 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Neutron][LBaaS] - Best release to upgrade from
LBaaS v1 to v2

Hello,

I am upgrading from Mitaka to Newton.

Our OpenStack cloud has LBaaSv1 in production.

I read all the following release notes:

https://docs.openstack.org/releasenotes/neutron/liberty.html
https://docs.openstack.org/releasenotes/neutron/mitaka.html
https://docs.openstack.org/releasenotes/neutron/newton.html

In the liberty release notes I read:
"The LBaaS V1 API is marked as deprecated and is planned to be removed in a
future release. Going forward, the LBaaS V2 API should be used."

But which release drops LBaaS V1?

I see this script is merged in stable/newton:

https://review.openstack.org/#/c/289595/

Can I still use LBaaS V1 in Newton and do the migration before upgrading to
Ocata?

The cherry-pick to Mitaka was abandoned:
https://review.openstack.org/#/c/370103/

The Ocata release notes again don't say anything about LBaaS:
https://docs.openstack.org/releasenotes/neutron/ocata.html

thank you

Saverio



--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15,
direct +41 44 268 1573 saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories



[openstack-dev] [octavia] Octavia PTG summary - Octavia team discussions (e-mail 2 of 2)

2017-03-06 Thread Michael Johnson
Some of the Octavia team attended the first OpenStack Project Team Gathering
(PTG) held in Atlanta the week of February 27th.  Below is a summary of the
notes we kept in the Octavia etherpad here:
https://etherpad.openstack.org/p/octavia-ptg-pike

This e-mail details discussions we had about Octavia specific topics.  A
second e-mail will cover topics the Octavia team discussed with the
cross-project teams.

Michael

Active/Active

  * This is a priority for Pike.  Cores will be actively reviewing these
patches.
  * We need to make sure there is good velocity for comments getting
addressed.

Amphora in containers

  * We need to work on this in Pike.  We cannot implement upgrade tests
under the current gate host limitations without containers being available.
  * Currently nova-lxd looks to be the shortest path for octavia.
  * It was noted that Docker can now support hot-plugging networks.
  * There is community interest in using Docker for the amphora.
  * There was interest in booting amphora via kubernetes, but we need to
research the implications and OpenStack support more.

Barbican

  * We discussed Octavia's ongoing use of barbican and our need for a
cascading ACL API.  This will greatly simplify the user experience for TLS
offloading load balancers via Octavia.  I captured the need in a barbican
bug here: https://bugs.launchpad.net/barbican/+bug/1666963
  * The barbican team requested that we do a bug scrub through the barbican
bugs to pull out any LBaaS/Octavia bugs that may be mis-filed or out of
date.  Michael (johnsom) will do that.

Container networking

  * Currently using neutron-lbaas with kubernetes, but planning to move to
Octavia.
  * Interested to use the L7 capability to save public IPs.

Documentation

  * Documentation is a priority for Pike!
  * API-REF is work in progress (johnsom):
https://review.openstack.org/#/q/topic:bug/1558385
  * API-guide we are deferring until after Pike
  * Upgrade guide is targeted for Pike (diltram/johnsom)
  * HA guide is targeted for Pike (diltram/johnsom)
  * OpenStack Ansible guide is targeted for Pike (xgerman)
  * Detailed setup guide is targeted for Pike (KeithMnemonic)
  * Admin guide we are deferring until after the documentation team spins it
out into the project repositories (https://review.openstack.org/439122)
  * Networking guide - discuss with john-davidge about a link out to Octavia
documentation.
  * Developer guide started, maybe Pike? (sindhu):
https://review.openstack.org/#/c/410419/
  * OpenStack Client guide - is this autogenerated?
  * User guide - Cookbooks kind of cover this, defer additional work until
after Pike
  * Troubleshooting guide - Pike?  (KeithMnemonic/johnsom)
  * Monitoring guide - defer until after Pike
  * Operator guide?
  * Runbooks?

Dragonflow

  * The dragonflow team wanted to meet with the Octavia team to discuss
integration points.  We talked about how dragonflow could be used with
Octavia and how offset style L7 rules might be implemented with dragonflow.
  * We gave an overview of how a dragonflow load balancing driver could be
added to Octavia.
  * We also asked if dragonflow would have the same issues as DVR does with
VRRP and provided a use case.

Drivers / Providers

  * Octavia provider support will be implemented via the named extension
model
  * Providers will be implemented as handlers like the existing noop and
Octavia driver are implemented.
  * Change in the interaction model with barbican (octavia service account
is authorized for barbican containers and content) means that octavia will
need to pass the required secure content to the providers.  This may be
different than how vendor drivers handled this content in neutron-lbaas, but
the old model should still be available to the vendor if they choose to not
adopt the new parameters passed to the handler.
  * Octavia will implement a new endpoint spawned from the health manager
processes that will receive statistics, status, and agent health information
from vendor agents or drivers.  This will establish a stable and scalable
interface for drivers to update this information.
o Proposed using a similar UDP endpoint to how the amps are reporting
this information.
o Would use a per-provider key for signing, seeded with the agent ID
o A method for agents to query the list of available agent health
endpoints will need to be created to allow the vendor agents to update their
list of available endpoints.
  * Octavia will add an agent database table similar to the neutron agents
idea.
  * Vendor agents can update this information via the above mechanism
providing the operator with visibility to the vendor agent health.
  * Octavia may include the octavia processes in the list of agents.
  * We need to understand if any of the vendor drivers are considering using
the driver shim (neutron-lbaas translation layer) or if they are all going
to update to the octavia handler API.
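
The signed statistics/health endpoint described in the bullets above could
look roughly like the following. This is a minimal sketch: the message
layout, field names, and HMAC-based signing are assumptions for
illustration, not Octavia's actual wire format.

```python
import hashlib
import hmac
import json

def sign_message(payload, provider_key):
    """Serialize a status payload and prefix it with a 32-byte HMAC-SHA256
    digest keyed with the per-provider key."""
    body = json.dumps(payload, sort_keys=True).encode()
    digest = hmac.new(provider_key, body, hashlib.sha256).digest()
    return digest + body

def verify_message(packet, provider_key):
    """Return the decoded payload, or None if the signature does not match
    (tampered packet or wrong provider key)."""
    digest, body = packet[:32], packet[32:]
    expected = hmac.new(provider_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        return None
    return json.loads(body)

# Hypothetical per-provider key, seeded with the agent ID as the bullets suggest.
key = b"per-provider-key-seeded-with-agent-id"
packet = sign_message({"agent_id": "agent-1", "status": "ONLINE"}, key)
print(verify_message(packet, key))              # round-trips the payload
print(verify_message(packet[:-1] + b"x", key))  # None: tamper detected
```

In the real design the packet would be sent over UDP to the health-manager
endpoint, similar to how the amphorae report today; only the signing and
verification logic is sketched here.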

Flavors

  * It appears that the flavors spec is stalled:
https

[openstack-dev] [octavia] Octavia PTG summary - Cross-project teams (e-mail 1 of 2)

2017-03-06 Thread Michael Johnson
Some of the Octavia team attended the first OpenStack Project Team Gathering
(PTG) held in Atlanta the week of February 27th.  Below is a summary of the
notes we kept in the Octavia etherpad here:
https://etherpad.openstack.org/p/octavia-ptg-pike

This e-mail details discussions we had with the cross-project teams.  A
follow-up e-mail will cover topics the Octavia team covered.

Sorry this is a bit long.  Octavia collaborates with a lot of OpenStack!  I
want to thank the cross-project teams for the warm and supportive reception
the Octavia team received.

Michael

Documentation team

  * Attended the discussion about "distributed documentation repositories"
to voice our support for this model.  Designate was also vocal in their
support of moving more of the documentation into the project repositories.
  * I proposed an approach for managing distributed documentation
repositories similar to how i18n manages localization for the projects.  We
agreed to pursue moving the administrator guide to a distributed model like
the installation guide.  I volunteered to capture the discussion from the
room in a docs spec which is here: https://review.openstack.org/#/c/439122/
  * We attended the discussion about the "HA guide".  The discussion in the
room was to keep the main HA guide a high-level guide and allow links out to
the project specific HA guides, such as the one we are planning to write for
Octavia in Pike.
  * Octavia documentation is a priority for Pike (discussion in the octavia
specific email).

Hierarchical quotas

  * The proposal is to store the quota limits in keystone.
  * The projects would still enforce the quotas and handle usage.
  * Octavia team should track this, but I don't think we will implement this
in Pike

Horizon team
  * The horizon team has lost a lot of developers.  They are down from
twelve active cores to three.
  * They are happy to help us with the dashboard, but do not have developer
resource available to work on the neutron-lbaas-dashboard
  * AngularJS is still the path forward for horizon plugins
  * The horizon team will be using the [horizon-plugin] tag for e-mails to
the mailing list that are relevant to teams with a horizon plugin.
  * There are currently test framework gaps for the AngularJS plugins.  They
will be re-examining options.
  * I inquired about the future of quotas in horizon as octavia will now
have quotas separate from neutron's.  It sounds like the quotas in horizon
need some work.  We (octavia) will defer integrating the octavia quotas into
horizon to a future release.

Neutron

  * The DVR "unbound allowed_address_pair with floatingip" bug that has been
open a few cycles is getting attention again
(https://bugs.launchpad.net/neutron/+bug/1583694 among others).
  * Swami has some new DVR patches: https://review.openstack.org/320669 and
https://review.openstack.org/#/c/323618 that he would like us to look at.
  * We provided Swami a list of Octavia/neutron-lbaas scenario tests that
fail with DVR enabled.
  * We discussed the use of the "device owner" property of the ports and the
DVR coding to look for "lbaas" and/or "" device owners in the above patches.
  * Kevin was interested in hearing about the neutron-lbaas pass through
proxy (lbaasv2-proxy) work.  This is the neutron extension that allows load
balancing requests to still be made via neutron for some time (deprecation
cycle).  German has a patch up for review:
https://review.openstack.org/#/c/418530
  * We also had a brief discussion about the plan for the OpenStack Client
Octavia plugin planned for Pike.

OpenStack Ansible (OSA)

  * Octavia patch is up for review: https://review.openstack.org/417210
  * xgerman worked with the OSA team on the patch and tying up the loose
ends.
  * OSA is discussing giving German core on the OSA octavia repo.
  * OSA is investigating alternative packaging formats, e.g. Snaps (Ubuntu
snappy), Helm, etc.

OpenStack Client (OSC)

  * Dean Troyer and the OSC team were kind enough to spend some time with us
and make sure we are on the right track for our client in Pike.
  * We decided that we will be creating a python-octaviaclient repository
for our OSC plugin.  (request has already been submitted)
  * We discussed how octavia should fit into the OSC terminology and agreed
that the following is a good approach
o loadbalancer (create, etc.)
o loadbalancer listener (CRUD)
o loadbalancer pool (CRUD)
o loadbalancer member (CRUD)
o loadbalancer healthmonitor (CRUD)
o loadbalancer quota? (CRUD) -> can be included in the common quotas
o loadbalancer l7policy (CRUD)
o loadbalancer l7rule (CRUD)
o loadbalancer flavor

Pike Goals

  * Python 3.5 support
o Work has started.  Octavia functional tests still need work.
Neutron-lbaas tests need work.
  * Control plane API via WSGI
o No work for neutron-lbaas as it is under the neutron API.
o Octavia API is already WSGI, just

Re: [openstack-dev] [ironic] Boot from Volume meeting?

2017-02-28 Thread Michael Turek

Hey Julia,

I like the idea of using the old neutron/ironic meeting time as a 
general-purpose meeting time slot. As the usage of the meeting changes, 
would the same meeting name and agenda page be used, or would 
subteams rename the meeting and create a new agenda page? Personally I 
would prefer the latter.


So the first meeting will be 3/06/17?

Thanks,
Mike

On 02/28/2017 09:42 AM, Julia Kreger wrote:

Greetings fellow ironic humanoids!

As many have known, I've been attempting to drive Boot from Volume
functionality in ironic over the past two years, largely in a slow,
incremental approach, which is in part due to how I perceived it
to best fit into the existing priorities when the discussions started.

During PTG there was quite a lot of interest from multiple people in
becoming involved and moving Boot from Volume forward this cycle. I
would like to move to having a weekly meeting with the focus of
integrating this functionality into ironic, much like we did with the
tighter neutron integration.

I have two reasons for proposing a new meeting:

* Detailed technical status updates and planning/co-ordination would
need to take place. This would functionally be noise to a large number
of contributors in the ironic community.

* Many of these details would need to be worked out prior to the
first part of the existing ironic meeting for the weekly status
update. The update being a summary of the status of each sub team.

With that having been said, I'm curious if we could re-use the
ironic-neutron meeting time slot [0] for this effort.  That meeting
was cancelled just after the first of this year [1]. In its place I
think we should have a general-purpose integration meeting that could
be used as a standing meeting, specifically reserved at this time for
Boot from Volume work, but also usable by any integration effort
that needs time to sync up in advance of the existing meeting.

-Julia

[0] 
http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings/ironic-neutron-integration-meeting.yaml
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-January/109536.html



Re: [openstack-dev] [ansible]Octavia ansible script

2017-02-28 Thread Michael Johnson
Hi Santhosh,

 

The correct path to the git repo is: 
http://git.openstack.org/cgit/openstack/openstack-ansible-os_octavia/

 

Though at this point the code has not merged, so you will need to pull from the 
patch if you want to try it out:

https://review.openstack.org/#/c/417210/

 

Michael

 

 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Monday, February 27, 2017 7:15 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ansible]Octavia ansible script

 

Thanks Major Hayden,

 

Hello German,

 

I don't have access to the repo
git://git.openstack.org/openstack/openstack-ansible-os_octavia



Can you provide us access ?

Thanks,
Santhosh




 

 

 

On Tue, Feb 7, 2017 at 7:37 PM, Major Hayden <major.hay...@rackspace.com> wrote:


On 02/07/2017 12:00 AM, Santhosh Fernandes wrote:
> Can we know the status of octavia ansible script ?
>
> Link :- https://blueprints.launchpad.net/~german-eichberger
>
> Is there any version available for beta testing? Can you provide us a link,
> or a timeline of availability?

Hello Santosh,

Although I drafted the spec[0], German Eichberger has taken over the work on 
the WIP patchset[1].  He would be the best person to discuss timelines and 
remaining work to be done.

[0] 
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/lbaasv2.html
[1] https://review.openstack.org/#/c/417210/

--
Major Hayden


 



Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-19 Thread Michael Still
Config drive over read-only NFS anyone?

Michael

On Sun, Feb 19, 2017 at 6:12 AM, Steve Gordon <sgor...@redhat.com> wrote:

> - Original Message -
> > From: "Artom Lifshitz" <alifs...@redhat.com>
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Saturday, February 18, 2017 8:11:10 AM
> > Subject: Re: [openstack-dev] [nova] Device tagging: rebuild config drive
> upon instance reboot to refresh metadata on
> > it
> >
> > In reply to Michael:
> >
> > > We have had this discussion several times in the past for other
> reasons.
> > > The
> > > reality is that some people will never deploy the metadata API, so I
> feel
> > > like we need a better solution than what we have now.
> >
> > Aha, that's definitely a good reason to continue making the config
> > drive a first-class citizen.
>
> The other reason is that the metadata API as it stands isn't an option for
> folks trying to do IPV6-only IIRC.
>
> -Steve
>
>



-- 
Rackspace Australia


[openstack-dev] [octavia] Reminder Boston Summit voting ends next week

2017-02-17 Thread Michael Johnson

Voting for presentations closes next Tuesday/Wednesday (TUESDAY, FEBRUARY 21
AT 11:59PM PST / WEDNESDAY, FEBRUARY 22 AT 6:59AM UTC).

If you have not already voted for sessions of interest, please do here:
https://www.openstack.org/summit/boston-2017/vote-for-speakers

Somehow this announcement e-mail got grouped under the [OpenStack Marketing]
tag for me, so I didn't see it in time to mention it in the meeting announcements.

Michael





Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-17 Thread Michael Still
We have had this discussion several times in the past for other reasons.
The reality is that some people will never deploy the metadata API, so I
feel like we need a better solution than what we have now.

However, I would consider it probably unsafe for the hypervisor to read the
current config drive to get values, and persisting things like the instance
root password in the Nova DB sounds like a bad idea too.

Michael




On Feb 18, 2017 6:29 AM, "Artom Lifshitz" <alifs...@redhat.com> wrote:

Early on in the inception of device role tagging, it was decided that
it's acceptable that the device metadata on the config drive lags
behind the metadata API, as long as it eventually catches up, for
example when the instance is rebooted and we get a chance to
regenerate the config drive.

So far this hasn't really been a problem because devices could only be
tagged at instance boot time, and the tags never changed. So the
config drive was pretty much always up to date.

In Pike the tagged device attachment series of patches [1] will
hopefully merge, and we'll be in a situation where device tags can
change during instance uptime, which makes it that much more important
to regenerate the config drive whenever we get a chance.

However, when the config drive is first generated, some of the
information stored in there is only available at instance boot time
and is not persisted anywhere, as far as I can tell. Specifically, the
injected_files and admin_pass parameters [2] are passed from the API
and are not stored anywhere.

This creates a problem when we want to regenerate the config drive,
because the information that we're supposed to put in it is no longer
available to us.

We could start persisting this information in instance_extra, for
example, and pulling it up when the config drive is regenerated. We
could even conceivably hack something to read the metadata files from
the "old" config drive before refreshing them with new information.
However, is that really worth it? I feel like saying "the config drive
is static, deal with it - if you want up-to-date metadata, use the
API" is an equally, if not more, valid option.

Thoughts? I know y'all are flying out to the PTG, so I'm unlikely to
get responses, but I've at least put my thoughts into writing, and
will be able to refer to them later on :)

[1] https://review.openstack.org/#/q/status:open+topic:bp/virt-device-tagged-attach-detach
[2] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2667-L2672
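
The "persist in instance_extra" option described above can be sketched as
follows. The function names and the store layout are illustrative
assumptions, not Nova's actual schema: the point is only that boot-time-only
inputs are stashed when the drive is first built, then replayed on
regeneration while the device tags are refreshed.

```python
# Stand-in for the nova instance_extra table (illustrative only).
instance_extra = {}

def build_config_drive(instance_uuid, injected_files, admin_pass, device_tags):
    # Persist the parameters that are otherwise only available at boot time.
    instance_extra[instance_uuid] = {
        "injected_files": injected_files,
        "admin_pass": admin_pass,
    }
    return {"files": injected_files, "admin_pass": admin_pass,
            "device_tags": device_tags}

def regenerate_config_drive(instance_uuid, current_device_tags):
    saved = instance_extra[instance_uuid]
    # Boot-time data is replayed; only the device tags are refreshed.
    return {"files": saved["injected_files"],
            "admin_pass": saved["admin_pass"],
            "device_tags": current_device_tags}

first = build_config_drive("uuid-1", ["/etc/motd"], "s3cret", ["nic-a"])
later = regenerate_config_drive("uuid-1", ["nic-a", "nic-b"])
```

The alternative mentioned in the thread, reading the old drive's metadata
files before refreshing them, would avoid the new table but couples the
regeneration path to the on-disk drive format.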

--
Artom Lifshitz



Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-17 Thread Michael Johnson
+1

 

Thanks for setting this up,

 

Michael

 

From: Kevin Benton [mailto:ke...@benton.pub] 
Sent: Friday, February 17, 2017 11:19 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

 

Hi all,

 

I'm organizing a Neutron social event for Thursday evening in Atlanta somewhere 
near the venue for dinner/drinks. If you're interested, please reply to this 
email with a "+1" so I can get a general count for a reservation.

 

Cheers,

Kevin Benton



Re: [openstack-dev] [octavia][sdk] service name for octavia

2017-02-15 Thread Michael Johnson
Funny,  I had posted a patch to fix the devstack plugin earlier the same day
you sent this e-mail.

https://review.openstack.org/#/c/433817/

I also found that it was using "octavia" for the service type and wanted to
fix it.

I had picked "loadbalancing" but I am fine with "load-balancing" as well.  I
will update the patch.

Michael


-Original Message-
From: Qiming Teng [mailto:teng...@linux.vnet.ibm.com] 
Sent: Tuesday, February 14, 2017 5:09 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [octavia][sdk] service name for octavia

When reviewing a recent patch that adds openstacksdk support to octavia, I
found that octavia is using 'octavia' as its service name instead of
'loadbalancing' or 'loadbalancingv2' or something similar.

The overall suggestion is to use a word/phrase that indicates what a service
do instead of the name of the project providing that service.

Below is the list of the service types currently supported by
openstacksdk:

'alarming',       # aodh
'baremetal',      # ironic
'clustering',     # senlin
'compute',        # nova
'database',       # trove
'identity',       # keystone
'image',          # glance
'key-manager',    # barbican
'messaging',      # zaqar
'metering',       # ceilometer
'network',        # neutron
'object-store',   # swift
'octavia',        # <--- this is an exception
'orchestration',  # heat
'volume',         # cinder
'workflowv2',     # mistral

While I believe this has been discussed about a year ago, I'm not sure if
there are things we missed, so I'm bringing this issue to a broader audience
for discussion.

Reference:

[1] Patch to python-openstacksdk:
https://review.openstack.org/#/c/428414
[2] Octavia service naming:
http://git.openstack.org/cgit/openstack/octavia/tree/devstack/plugin.sh#n52

Regards,
 Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-02-09 Thread michael mccune

Greetings OpenStack community,

This week's meeting[0] was relatively light, with some discussion about 
the recently released Ethercalc[4], and the first draft of the API 
compatibility guideline[5] (many thanks to Chris Dent). The compatibility 
guideline lays out some concrete standards by which projects can 
definitively determine whether they are following the API guidelines in 
general, and how closely their efforts track. This work is part of the 
foundation for creating an API compatibility tag which projects can apply 
for.


One reminder: Projects should make sure that their liaison information 
is up to date at 
http://specs.openstack.org/openstack/api-wg/liaisons.html . If not, 
please provide a patch to doc/source/liaisons.json to update it.


One guideline was frozen this week.

# Newly Published Guidelines

* Add guideline for invalid query parameters
  https://review.openstack.org/#/c/417441/
* Add guidelines on usage of state vs. status
  https://review.openstack.org/#/c/411528/
* Clarify the status values in versions
  https://review.openstack.org/#/c/411849/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guidelines for boolean names
  https://review.openstack.org/#/c/411529/

# Guidelines Currently Under Review [3]

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

* Refactor and re-validate api change guidelines
  https://review.openstack.org/#/c/421846/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address 
your concerns in an email to the OpenStack developer mailing list[1] 
with the tag "[api]" in the subject. In your email, you should include 
any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.


To learn more about the API WG mission and the work we do, see OpenStack 
API Working Group [2].


Thanks for reading and see you next week!

# References

[0] 
http://eavesdrop.openstack.org/meetings/api_wg/2017/api_wg.2017-02-09-16.00.html

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] 
https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

[4] https://ethercalc.openstack.org/
[5] https://review.openstack.org/#/c/421846/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] IRC Mishaps

2017-02-08 Thread Michael Still
At a previous employer we had a policy that all passwords started with "/"
because of the sheer number of times someone typed the root password into a
public IRC channel.

Michael

On Thu, Feb 9, 2017 at 10:04 AM, Jay Pipes <jaypi...@gmail.com> wrote:

> On 02/08/2017 03:36 PM, Kendall Nelson wrote:
>
>> Hello All!
>>
>> So I am sure we've all seen it: people writing terminal commands into
>> our project channels, misusing '/' commands, etc. But have any of you
>> actually done it?
>>
>> If any of you cores, ptls or other upstanding members of our wonderful
>> community have had one of these embarrassing experiences please reply! I
>> am writing an article for the SuperUser trying to make us all seem a
>> little more human to people new to the community and new to using IRC.
>> It can be scary asking questions to such a large group of smart people
>> and it's even more off-putting when we make mistakes in front of them.
>>
>> So please share your stories!
>>
>
> Hi!
>
> I can't tell you the number of times I've typed or pasted one of these:
>
> :wq
>
> or
>
> 1407gg
>
> or
>
> ggVG
>
> or
>
> :find nova/compute/resource_tracker.py
>
> or
>
> Vgq
>
> or
>
> tox -py27 -- --failing
>
> Such is the life of a keyboard-driven contributor I guess! :)
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] FFE for novajoin in TripleO's undercloud

2017-02-02 Thread Michael Still
What version of nova is tripleo using here? This won't work quite right if
you're using Mitaka until https://review.openstack.org/#/c/427547/ lands
and is released.

Also, I didn't know novajoin existed and am pleased to have discovered it.

Michael



On Fri, Feb 3, 2017 at 11:27 AM, Juan Antonio Osorio <jaosor...@gmail.com>
wrote:

> Hello,
>
> I would like to request an FFE to properly support the novajoin vendordata
> plugin in TripleO. Most of the work has landed, however, we still need to
> add it to TripleO's CI in order to have it officially supported.
>
> This is crucial for TLS-everywhere configuration's usability, since it
> makes it easier to populate the required field's in the CA (which in our
> case is FreeIPA). I'm currently working on a patch to add it to the
> fakeha-caserver OVB job; which, after this is done, I hope to move from the
> experimental queue, to the periodic one.
>
> BR
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosor...@gmail.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] To rootwrap or piggyback privsep helpers?

2017-01-25 Thread Michael Still
I think #3 is the right call for now. The person we had working on privsep
has left the company, and I don't have anyone I could get to work on this
right now. Oh, and we're out of time.

Michael

On Thu, Jan 26, 2017 at 3:49 PM, Matt Riedemann <mriede...@gmail.com> wrote:

> The patch to add support for ephemeral storage with the Virtuozzo config
> is using the privsep helper from os-brick to run a new ploop command as
> root:
>
> https://review.openstack.org/#/c/312488/
>
> I've objected to this because I'm pretty sure this is not how we intended
> to be using privsep in Nova. The privsep helper in os-brick should be for
> privileged commands that os-brick itself needs to run, and was for things
> that used to have to be carried in both nova and cinder rootwrap filters.
>
> I know we also want new things in nova that require root access to execute
> commands to run privsep, but we haven't had anything do that yet, and we've
> said we'd like an example before making it a hard rule. But we're finding
> it hard to put our foot down on the first one (I remember we allowed
> something in with rootwrap in Newton because we didn't want to block on
> privsep).
>
> With feature freeze coming up tomorrow, however, I'm now torn on how to
> handle this. The options I see are:
>
> 1. Block this until it's properly using privsep in Nova, effectively
> killing it's chances to make Ocata.
>
> 2. Allow the patch as-is with how it's re-using the privsep helper from
> os-brick.
>
> 3. Change the patch to just use rootwrap with a new compute.filters entry,
> no privsep at all - basically how we used to always do this stuff.
>
> In the interest of time, and not seeing anyone standing up to lead the
> charge on privsep conversion in Nova in the immediate future, I'm learning
> toward just doing #3 but wanted to get other opinions.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc]

2017-01-24 Thread Michael Gale
Hello Bernard,
I believe the design docs and API parts are good; once I had the
environment up and running, I didn't have any problems following the
examples or running the commands.

My biggest hurdle was getting the devstack environment functioning. I was
following the steps here:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining

I think the issues are related to using Ubuntu 16.04 instead of Ubuntu
14.04, since devstack now recommends 16.04. The OVS steps seem out of place,
and in my local.conf file I needed to set:
--snip--
export SFC_UPDATE_OVS=False
# Disable security groups
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
enable_plugin networking-sfc git://git.openstack.org/openstack/networking-sfc stable/newton
--snip--

This could be related to my lack of experience with Devstack. I was also
concerned about having to set SFC_UPDATE_OVS=False in the configuration. Does
this affect the underlying functionality of SFC?

Also the link to the Horizon add-on would be great.

Thanks
Michael


On Tue, Jan 24, 2017 at 6:30 AM, Bernard Cafarelli <bcafa...@redhat.com>
wrote:

> On 20 January 2017 at 00:06, Michael Gale <gale.mich...@gmail.com> wrote:
> > Hello,
> >
> > Are there updated install docs for sfc? The only install steps for a
> > testbed I can find are here and they seem outdated:
> > https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining
> There is also a SFC chapter in the networking guide:
> http://docs.openstack.org/newton/networking-guide/config-sfc.html
>
> Which parts do you find outdated? Some references to Ubuntu/OVS
> versions may need a cleanup, but the design and API parts should still
> be OK
> (OSC client, SFC graph API, symmetric ports and other goodies are
> still under review and not yet merged)
>
> > Also from the conference videos there seems to be some Horizon menu /
> > screens that are available?
> Not for networking-sfc directly, but there is a SFC tab in the tacker
> horizon plugin (or will be, someone from the tacker team can confirm
> that)
>
>
> --
> Bernard Cafarelli
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

“The Man who says he can, and the man who says he can not.. Are both
correct”
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Nominating German Eichberger for Octavia core reviewer

2017-01-23 Thread Michael Johnson
With that vote we have quorum.  Welcome back German!

 

Michael

 

 

From: Kosnik, Lubosz [mailto:lubosz.kos...@intel.com] 
Sent: Sunday, January 22, 2017 12:24 PM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [octavia] Nominating German Eichberger for
Octavia core reviewer

 

+1, welcome back. 

 

Lubosz

 

On Jan 20, 2017, at 2:11 PM, Miguel Lavalle <mig...@mlavalle.com> wrote:

 

Well, I don't vote here but it's nice to see German back in the community.
Welcome!

 

On Fri, Jan 20, 2017 at 1:26 PM, Brandon Logan <brandon.lo...@rackspace.com> wrote:

+1, yes welcome back German.

On Fri, 2017-01-20 at 09:41 -0800, Michael Johnson wrote:
> Hello Octavia Cores,
>
> I would like to nominate German Eichberger (xgerman) for
> reinstatement as an
> Octavia core reviewer.
>
> German was previously a core reviewer for Octavia and neutron-lbaas
> as well
> as a former co-PTL for Octavia.  Work dynamics required him to step
> away
> from the project for a period of time, but now he has moved back into
> a
> position that allows him to contribute to Octavia.  His review
> numbers are
> back in line with other core reviewers [1] and I feel he would be a
> solid
> asset to the core reviewing team.
>
> Current Octavia cores, please respond with your +1 vote or any
> objections.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia-group/90
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] PTL candidacy for Pike series

2017-01-23 Thread Michael Johnson

Hello Octavia folks,

I wanted to let you know that I am running for the PTL position again for
Pike.

My candidacy statement is available here:
https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/Octavia/johnsom.txt

Thank you for your consideration,

Michael



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ocatvia]Newton Octavia lbaas creation error

2017-01-23 Thread Michael Johnson
Santhosh,

 

From the traceback below it looks like the neutron process is unable to access
keystone.

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource DriverError: Driver 
error: Unable to establish connection to http://127.0.0.1:5000/v2.0/tokens: 
HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: 
/v2.0/tokens (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 111] 
ECONNREFUSED',))

 

So, I would check the neutron.conf settings for keystone, like the user/password,
and verify that the neutron process can reach keystone on http://127.0.0.1:5000.
Maybe there is a bad security group, or keystone isn't running?
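For what it's worth, a quick way to separate "keystone isn't listening" (the ECONNREFUSED in the traceback) from auth or config problems higher in the stack is a plain TCP check. This is a generic sketch, not Octavia-specific; the demo connects to an ephemeral local listener rather than a real keystone:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    ECONNREFUSED (as in the traceback above) means nothing is listening
    on that port; a timeout usually points at a firewall or security
    group instead.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against an ephemeral local listener so the sketch is
# self-contained; against a real deployment you would check keystone's
# host and port 5000 directly.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
print(can_connect("127.0.0.1", port))  # True: something is listening
server.close()
print(can_connect("127.0.0.1", port))  # False: connection refused
```

If this check passes but the driver error persists, the problem is more likely the credentials or endpoint URL in neutron.conf.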

 

Michael



 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Sunday, January 22, 2017 10:48 AM
To: openstack-dev@lists.openstack.org; Michael Johnson <johnso...@gmail.com>
Subject: [openstack-dev][ocatvia]Newton Octavia lbaas creation error

 

Hi all,

 

I am getting driver connection error while creation the LB from octavia. 

 

Stack trace - 

 

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
[req-c6f19e4c-dfbd-4b1c-8198-925d05f9fcdf cf13e167c1884e7a8d63293a454ca774 
48ab507e206741c4ba304efaf5209963 - - -] create failed: No details.

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource Traceback (most 
recent call last):

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/resource.py",
 line 79, in resource

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource result = 
method(request=request, **args)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/api/v2/base.py",
 line 430, in create

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 88, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 84, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", 
line 151, in wrapper

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_db/api.py", 
line 139, in wrapper

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/neutron/db/api.py",
 line 124, in wrapped

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
traceback.format_exc())

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource 
self.force_reraise()

2017-01-22 12:21:51.569 14448 ERROR neutron.api.v2.resource   File 
"/openstack/venvs/neutron-14.0.3/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise

2017-01-22 12:21:51

[openstack-dev] [octavia] Nominating German Eichberger for Octavia core reviewer

2017-01-20 Thread Michael Johnson

Hello Octavia Cores,

I would like to nominate German Eichberger (xgerman) for reinstatement as an
Octavia core reviewer.

German was previously a core reviewer for Octavia and neutron-lbaas as well
as a former co-PTL for Octavia.  Work dynamics required him to step away
from the project for a period of time, but now he has moved back into a
position that allows him to contribute to Octavia.  His review numbers are
back in line with other core reviewers [1] and I feel he would be a solid
asset to the core reviewing team.

Current Octavia cores, please respond with your +1 vote or any objections.

Michael

[1] http://stackalytics.com/report/contribution/octavia-group/90


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc]

2017-01-19 Thread Michael Gale
Hello,

Are there updated install docs for sfc? The only install steps for a
testbed I can find are here and they seem outdated:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining

Also from the conference videos there seems to be some Horizon menu /
screens that are available?

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstacksdk and compute limits for projects

2017-01-17 Thread Michael Gale
Hello,

Does anyone know what the equivalent of the following command would be
via the API?
`openstack limits show --absolute --project `

I am using an admin account to pull stats and information from a Mitaka
environment, now I can run the above command in bash, looping over each
project that exist. However I would like to get the information using the
openstacksdk via Python.

I can use:
`connection.compute.get_limits()`
 however that only works for the project I logged in with.
Michael
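Lacking a per-project call in that SDK version, one fallback is the compute REST API that the CLI command wraps. As far as I know, an admin token can request another project's limits via a tenant_id query parameter on GET /limits, but both the endpoint path and the parameter name here are assumptions to verify against your deployment's API version:

```python
def limits_url(compute_endpoint, project_id):
    """Build the GET URL for another project's compute limits (admin only).

    The ``tenant_id`` query parameter is an assumption based on the
    nova v2.1 API; check your cloud's API documentation.
    """
    return "%s/limits?tenant_id=%s" % (compute_endpoint.rstrip("/"), project_id)

def absolute_limits(body):
    """Pull the 'absolute' section out of a GET /limits JSON response."""
    return body.get("limits", {}).get("absolute", {})

print(limits_url("http://nova:8774/v2.1/", "abc123"))
# http://nova:8774/v2.1/limits?tenant_id=abc123

sample = {"limits": {"rate": [],
                     "absolute": {"maxTotalCores": 20, "totalCoresUsed": 4}}}
print(absolute_limits(sample)["maxTotalCores"])  # 20
```

Looping that URL over the project list from the identity API would reproduce the bash loop described above, with one admin token instead of one login per project.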
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ocatvia]Newton Octavia multinode setup

2017-01-13 Thread Michael Johnson
Hi Santhosh,

 

Currently there is not an OpenStack Ansible (OSA) role for Octavia, but one is 
under development now.  Keep an eye on the OSA project for updates.

 

Michael

 

From: Santhosh Fernandes [mailto:santhosh.fernan...@gmail.com] 
Sent: Thursday, January 12, 2017 10:13 PM
To: openstack-dev@lists.openstack.org; johnso...@gmail.com
Subject: [openstack-dev][ocatvia]Newton Octavia multinode setup

 

Hi all,

 

Is there any documentation or ansible playbook to install octavia on multi-node 
or all-in-one setup?

I am trying to setup in my lab but not able to find any documentation. 

 

 

Thanks,

Santhosh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Confusion around the complexity

2017-01-12 Thread Michael Johnson
Actually we have a single call create for load balancers[1], so I think that
addresses Josh's concern about complexity in the number of required calls.
As for the complexity of the "concept" of a load balancer, I think we have
improved that greatly with the LBaaSv2 API.

That said, if there are things that jump out at you for usability/complexity
please open a bug for us[2].  We welcome the input!

Michael

P.S. Yes, single call create is not in the main API docs, don't ask, yes we
are working on that.

[1]
http://docs.openstack.org/developer/octavia/api/octaviaapi.html#create-fully-populated-load-balancer
[2] https://bugs.launchpad.net/octavia/+filebug
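To illustrate what "single call create" means in practice, here is a sketch of a fully-populated request body: listener, pool, and members nested inside one load balancer create. The field names follow the LBaaSv2 style but are illustrative only; the API reference in [1] is the authoritative schema:

```python
def single_call_lb(name, vip_subnet_id, members):
    """Sketch of a fully-populated single-call load balancer create body.

    Field names are illustrative assumptions, not the exact Octavia
    schema; see the API reference linked above for the real thing.
    """
    return {
        "loadbalancer": {
            "name": name,
            "vip_subnet_id": vip_subnet_id,
            "listeners": [{
                "name": name + "-listener",
                "protocol": "HTTP",
                "protocol_port": 80,
                "default_pool": {
                    "name": name + "-pool",
                    "protocol": "HTTP",
                    "lb_algorithm": "ROUND_ROBIN",
                    "members": [{"address": addr, "protocol_port": port}
                                for addr, port in members],
                },
            }],
        }
    }

body = single_call_lb("game-api", "example-subnet-id", [("10.0.0.5", 8080)])
print(body["loadbalancer"]["listeners"][0]["default_pool"]["members"])
# [{'address': '10.0.0.5', 'protocol_port': 8080}]
```

One POST of a body like this replaces the separate create-loadbalancer, create-listener, create-pool, and add-member calls.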

-Original Message-
From: Curtis [mailto:serverasc...@gmail.com] 
Sent: Thursday, January 12, 2017 3:30 PM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] Confusion around the complexity

On Thu, Jan 12, 2017 at 3:46 PM, Joshua Harlow <harlo...@fastmail.com>
wrote:
> So I don't want to start to much of a flame-war and am really just 
> trying to understand things that may be beyond me (so treat me nicely,
ha).
>
> The basic question that I've been wondering revolves around the 
> following kind of 'thought experiment' that asks something along the lines
of:
>
> """
> If I am a user of openstack, say I'm an iphone developer, trying to 
> get my 'game' and associated 'game APIs' setup in a manner that is HA 
> (say fronted by a load-balancer), using my custom image, secure and 
> visible to either an intranet or to the large internet then what is 
> the steps I would have to do when interacting with openstack to 
> accomplish this and what would the provider of openstack have to give to
me as endpoints to make this possible.
> """
>

Presumably this is a public OpenStack cloud? If so...

It's been a while since I worked at a public OpenStack cloud, but most I
would imagine will auto create a tenant network and router (if they can
afford the public IPv4s for the router :)) and then when a user creates an
instance it just ends up on that initial "default" tenant network. This is
usually left to the public cloud to implement during their customer
on-boarding process. That is assuming the public cloud is allowing tenant
"private" networks, which not all would do. There are other models.

Now, a load balancer, if that is required, is different and bit harder if
you mean one that is managed by the OpenStack cloud, as opposed to a user
creating their own LB instance.

Perhaps what you are really thinking about is the simplicity of a more "VPS"-like
interface, à la Digital Ocean (and now somewhat mimicked by AWS with, uh,
LightSail I think). I've always thought it would perhaps be a nice project
in OpenStack to do a simple VPS-style interface.

Thanks,
Curtis.

> One of the obvious ones is nova and glance, and the API and usage 
> there feels pretty straightforward as is (isn't really relevant to 
> this conversation anyway). The one that feels bulky and confusing (at 
> least for
> me) is the things I'd have to do in neutron to create and/or select 
> networks, create and/or select subnets, create and/or select ports and 
> so-on...
>
> As a supposed iphone developer (dev/ops, yadayada) just trying to get 
> his/her game to market why would I really want to know about selecting 
> networks, create and/or selecting subnets, create and/or selecting 
> ports and so-on...
>
> It may just be how it is, but I'd like to at least ask if others are 
> really happy with the interactions/steps (I guess we could/maybe we 
> should ask similar questions around various other projects as well?); 
> if I'm just an outlier that's ok, at least I asked :-P
>
> -Josh
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Blog: serverascode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Which service is using port 8778?

2017-01-10 Thread Michael Davies
On Tue, Dec 20, 2016 at 4:46 PM, Ghanshyam Mann <
ghanshyam.m...@nectechnologies.in> wrote:
[snip]

> But OpenStack port used by services are maintained here[3], may be it will
> be good for each project to add their port in this list.
>
[snip]

> ..[3] http://docs.openstack.org/newton/config-reference/
> firewalls-default-ports.html


I know this thread has moved on, but I'm not sure a list of default ports
for a firewall is the right place to be documenting this.

If there are admin services that perhaps should not, by default, be exposed
publicly - then they shouldn't be listed in such a table.  A simple
implementation might be to expose all of these, which would not be the most
secure default.

Perhaps the equivalent of /etc/services or
http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml
specifically for OpenStack might be better.

Hope this helps,

Michael...
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] python 3 tests hate my exception handling

2017-01-04 Thread Michael Still
Ok, I think at this point I'll propose a tweak to keystoneauth to make this
easier, and then refactor my nova code around that.

Thanks for the help everyone.

Hugs and kisses,
Michael
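The pattern under discussion — resolving a module's __all__ of string names into real exception classes before the except clause runs — can be sketched with a stand-in module, so keystoneauth1 itself isn't needed to see the Python 3 behavior:

```python
import types

# Hypothetical stand-in for keystoneauth1.exceptions: a module whose
# __all__ lists its exception classes by name (as strings).
fake_exc = types.ModuleType("fake_exc")

class ConnectFailure(Exception):
    pass

class BadGateway(Exception):
    pass

fake_exc.ConnectFailure = ConnectFailure
fake_exc.BadGateway = BadGateway
fake_exc.__all__ = ["ConnectFailure", "BadGateway"]

# Resolve the string names to the actual classes up front. If the tuple
# still contains anything that is not a BaseException subclass (e.g. a
# bare string that getattr failed to resolve), Python 3 raises
# "TypeError: catching classes that do not inherit from BaseException".
KNOWN_EXCEPTIONS = tuple(
    getattr(fake_exc, name) for name in fake_exc.__all__
) + (TypeError, ValueError)

def call_service():
    raise BadGateway("upstream error")

def safe_call():
    try:
        call_service()
    except KNOWN_EXCEPTIONS as e:
        return {"error": str(e)}

print(safe_call())  # {'error': 'upstream error'}
```

The upside of deriving the tuple from __all__, as the thread notes, is that newly added exception classes are caught for free — at the cost of depending on __all__ being a stable, public part of the library.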




On Wed, Jan 4, 2017 at 4:52 PM, Morgan Fainberg <morgan.fainb...@gmail.com>
wrote:

>
>
> On Jan 3, 2017 19:29, "Matt Riedemann" <mrie...@linux.vnet.ibm.com> wrote:
>
> On 1/3/2017 8:48 PM, Michael Still wrote:
>
>> So...
>>
>> Our python3 tests hate [1] my exception handling for continued
>> vendordata implementation [2].
>>
>> Basically, it goes a bit like this -- I need to move from using requests
>> to keystoneauth1 for external vendordata requests. This is because we're
>> adding support for sending keystone headers with the request so that the
>> external service can verify that it is nova talking. That bit isn't too
>> hard.
>>
>> However, keystoneauth1 uses different exceptions to report errors.
>> Conveniently, it has variables which list all of the connection and http
>> exceptions which it might raise. Inconveniently, they're listed as
>> strings, so I have to construct a list of them like this:
>>
>> # NOTE(mikal): keystoneauth makes me jump through hoops to get these
>> # exceptions, which are listed as strings. Mutter.
>> KEYSTONEAUTH_EXCEPTIONS = [TypeError, ValueError]
>> for excname in (ks_exceptions.connection.__all__ +
>>                 ks_exceptions.http.__all__):
>>     KEYSTONEAUTH_EXCEPTIONS.append(getattr(ks_exceptions, excname))
>>
>> Then when it comes time to catch exceptions from keystoneauth1, we can
>> just do this thing:
>>
>> except tuple(KEYSTONEAUTH_EXCEPTIONS) as e:
>>     LOG.warning(_LW('Error from dynamic vendordata service '
>>                     '%(service_name)s at %(url)s: %(error)s'),
>>                 {'service_name': service_name,
>>                  'url': url,
>>                  'error': e},
>>                 instance=self.instance)
>>     return {}
>>
>> Which might be a bit horrible, but is nice in that if keystoneauth1 adds
>> new connection or http exceptions, we get to catch them for free.
>>
>> This all works and is tested. However, it causes the py3 tests to fail
>> with this exception:
>>
>> 'TypeError: catching classes that do not inherit from BaseException is
>> not allowed'
>>
>> Which is bemusing to me because I'm not very smart.
>>
>> So, could someone smarter than me please look at [1] and tell me why I
>> get [2] and how to not get that thing? Answers involving manually
>> listing many exceptions will result in me making a sad face and
>> sarcastic comment in the code, so something more elegant than that would
>> be nice.
>>
>> Discuss.
>>
>> Thanks,
>> Michael
>>
>>
>> 1: http://logs.openstack.org/91/416391/1/check/gate-nova-python
>> 35-db/7835df3/console.html#_2017-01-04_01_10_35_520409
>> 2: https://review.openstack.org/#/c/415597/3/nova/api/metadata/
>> vendordata_dynamic.py
>>
>> --
>> Rackspace Australia
>>
>>
>> 
>>
>>
> My first question is, does the KSA team consider the 'connection' and
> 'http' variables as public / contractual to the KSA API in the library? If
> not, they could change/remove those and break nova which wouldn't be cool.
>
>
> For what it is worth, Keystoneauth has been built very carefully so that
> everything that is public should be public (not prefixed with "_"), short
> of a massive security issue, we will not change/break an interface that is
> public (even not intentionally public); we may deprecate and warn if we
> don't want you to use the interface, but it will remain.
>
> The only time a public interface will be removed from KSA will be if we
> move to "keystoneauth2". In short, connection and HTTP variables are public
> today and will remain so (even if it was unintentional).
>
>
> For what it's worth, this is what we handle when making requests to the
> placement service using KSA:
>
> https://github.com/openstack/nova/blob/34f4b1bd68d6011da76e6
> 8c4ddae9f28e37eed9a/nova/scheduler/client/report.py#L37
>
> If nothing else, maybe that's all you'd need?
>
> Another alternative is building whatever you

[openstack-dev] [Nova] python 3 tests hate my exception handling

2017-01-03 Thread Michael Still
So...

Our python3 tests hate [1] my exception handling for continued vendordata
implementation [2].

Basically, it goes a bit like this -- I need to move from using requests to
keystoneauth1 for external vendordata requests. This is because we're
adding support for sending keystone headers with the request so that the
external service can verify that it is nova talking. That bit isn't too
hard.

However, keystoneauth1 uses different exceptions to report errors.
Conveniently, it has variables which list all of the connection and http
exceptions which it might raise. Inconveniently, they're listed as strings,
so I have to construct a list of them like this:

# NOTE(mikal): keystoneauth makes me jump through hoops to get these
# exceptions, which are listed as strings. Mutter.
KEYSTONEAUTH_EXCEPTIONS = [TypeError, ValueError]
for excname in (ks_exceptions.connection.__all__ +
                ks_exceptions.http.__all__):
    KEYSTONEAUTH_EXCEPTIONS.append(getattr(ks_exceptions, excname))

Then when it comes time to catch exceptions from keystoneauth1, we can just
do this thing:

except tuple(KEYSTONEAUTH_EXCEPTIONS) as e:
    LOG.warning(_LW('Error from dynamic vendordata service '
                    '%(service_name)s at %(url)s: %(error)s'),
                {'service_name': service_name,
                 'url': url,
                 'error': e},
                instance=self.instance)
    return {}

Which might be a bit horrible, but is nice in that if keystoneauth1 adds
new connection or http exceptions, we get to catch them for free.

This all works and is tested. However, it causes the py3 tests to fail with
this exception:

'TypeError: catching classes that do not inherit from BaseException is not
allowed'

Which is bemusing to me because I'm not very smart.

So, could someone smarter than me please look at [1] and tell me why I get
[2] and how to not get that thing? Answers involving manually listing many
exceptions will result in me making a sad face and sarcastic comment in the
code, so something more elegant than that would be nice.
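The py3 error means something in KEYSTONEAUTH_EXCEPTIONS is not actually an
exception class: on python 3, `except` refuses any tuple member that is not a
`BaseException` subclass, so a name in `__all__` that resolves to anything
else (or to nothing) poisons the whole tuple. One hedged sketch of an answer
that keeps the "new exceptions caught for free" property is to filter the
names down to genuine exception classes — `collect_exception_classes` and the
stand-in module below are hypothetical illustrations, not keystoneauth1 API:

```python
import inspect
import types


def collect_exception_classes(*modules):
    """Return a tuple of the BaseException subclasses named in each
    module's __all__, silently skipping anything that is not one."""
    found = []
    for mod in modules:
        for name in getattr(mod, "__all__", ()):
            obj = getattr(mod, name, None)
            if inspect.isclass(obj) and issubclass(obj, BaseException):
                found.append(obj)
    return tuple(found)


# Demonstration with a stand-in module: one real exception plus a stray
# non-exception entry that would otherwise break "except tuple(...)" on py3.
fake = types.ModuleType("fake_exceptions")


class ConnectFailure(Exception):
    pass


fake.ConnectFailure = ConnectFailure
fake.not_an_exception = "oops"  # this is what triggers the py3 TypeError
fake.__all__ = ["ConnectFailure", "not_an_exception"]

CAUGHT = collect_exception_classes(fake)  # -> (ConnectFailure,)

try:
    raise ConnectFailure("boom")
except CAUGHT as e:
    handled = str(e)
```

Filtering with `inspect.isclass`/`issubclass` guarantees the tuple is always
safe to catch on both python 2 and 3 while still picking up any new
connection/http exceptions a library adds later.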

Discuss.

Thanks,
Michael


1:
http://logs.openstack.org/91/416391/1/check/gate-nova-python35-db/7835df3/console.html#_2017-01-04_01_10_35_520409
2:
https://review.openstack.org/#/c/415597/3/nova/api/metadata/vendordata_dynamic.py

-- 
Rackspace Australia


Re: [openstack-dev] [nova][nova-docker] Time to retire nova-docker?

2016-12-29 Thread Michael Still
I'd be remiss if I didn't point out that the nova LXC driver is much better
supported than the nova-docker driver.

Michael

On Thu, Dec 29, 2016 at 8:01 PM, Esra Celik <celik.e...@tubitak.gov.tr>
wrote:

>
> Hi Sam,
>
> nova-lxc is not recommended in production [1]. And LXD is built on top of
> LXC AFAIK. But I will investigate nova-lxd in detail, thank you.
> If nova-docker will be retired at the end of the day, we will need to
> choose a similar service.
>
> [1] http://docs.openstack.org/newton/config-reference/
> compute/hypervisor-lxc.html
>
> ecelik
>
> --
>
> *Kimden: *"Sam Stoelinga" <sammiest...@gmail.com>
> *Kime: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Gönderilenler: *29 Aralık Perşembe 2016 0:13:22
>
> *Konu: *Re: [openstack-dev] [nova][nova-docker] Time to retire
> nova-docker?
>
> Esra,
>
> Not sure what's your use case, but I would also take a look at nova LXC
> driver. It looks like you are treating your Containers as VMs and for that
> I would say the nova lxc driver is a better fit. [1] Docker has specific
> requirements on images and networking, which doesn't fit well in the nova
> model imo.
>
> There is also a new hypervisor LXD which aims to treat containers as VMs
> as well. [2]
>
> [1] http://docs.openstack.org/developer/nova/support-matrix.html
> [2] https://linuxcontainers.org/lxd/introduction/
>
> Regards,
> Sam Stoelinga
>
> On Mon, Dec 26, 2016 at 10:38 AM, Esra Celik <celik.e...@tubitak.gov.tr>
> wrote:
>
>>
>> Hi Jay, I was asking because our discussions to contribute to
>> nova-docker project ran across the discussions here to retire the project :)
>>
>> Hongbin, that is exactly what I meant. Using nova-docker it deploys
>> containers to physical machines, not virtual machines.
>> Using the Ironic driver with Magnum is a solution, but I guess every time we
>> create a cluster with Magnum it will redeploy the operating system on
>> the selected physical machine, which is not necessary.
>> I will investigate the Zun project more, thank you very much. What would you
>> say about its current maturity level?
>>
>>
>>
>> --
>>
>> *Kimden: *"Hongbin Lu" <hongbin...@gmail.com>
>> *Kime: *"OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> *Gönderilenler: *26 Aralık Pazartesi 2016 17:53:00
>> *Konu: *Re: [openstack-dev] [nova][nova-docker] Time to retire
>> nova-docker?
>>
>> I guess "extra virtualization layer" means Magnum provisions a Container
>> Orchestration Engine (COE) on top of nova instances. If the nova instances
>> are virtual machines, there is an "extra virtualization layer".
>>
>> I think you could consider using Magnum with Ironic driver. If the driver
>> is Ironic, COEs are deployed to nova instances that are physical machines
>> provided by Ironic. Zun project [1] could be another option for your use
>> case. Zun is similar to nova-docker, which enables running containers on
>> compute hosts. You could find a thoughtful introduction here [2].
>>
>> [1] https://wiki.openstack.org/wiki/Zun
>> [2] http://www.slideshare.net/hongbin034/zun-presentation-
>> openstack-barcelona-summit
>>
>> Best regards,
>> Hongbin
>>
>> On Mon, Dec 26, 2016 at 8:23 AM, Jay Pipes <jaypi...@gmail.com> wrote:
>>
>>> On 12/26/2016 08:23 AM, Esra Celik wrote:
>>>
>>>> Hi All,
>>>>
>>>> It is very sad to hear nova-docker's retirement. Me and my team (3) are
>>>> working for a cloud computing laboratory and we were very keen on
>>>> working with nova-docker.
>>>> After some research about its current state I saw these mails. Will you
>>>> actually propose another equivalent to nova-docker or is it just the
>>>> lack of contributors to this project?
>>>> Some of the contributors previously advised us the magnum project
>>>> instead of nova-docker, however it does not satisfy our needs because of
>>>> the additional virtualization layer it needs.
>>>> If the main problem is the lack of contributors we may participate in
>>>> this project.
>>>>
>>> There's never any need to ask permission to contribute to a project :)
>>> If nova-docker driver is something you cannot do without, feel free to
>>> contribu

[openstack-dev] Retire the radar project?

2016-12-21 Thread Michael Still
Hi,

radar was an antique effort to import some outside-OpenStack code that did
CI reliability dashboarding. It was never really a thing, and has been
abandoned over time.

The last commit that wasn't part of a project wide change series was in
January 2015.

Does anyone object to me following the project removal steps described at
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

Thanks,
Michael

-- 
Rackspace Australia


Re: [openstack-dev] [neutron] Where will Neutron go in future?

2016-12-20 Thread Michael Johnson
Hi Zhi,

LBaaSv2 as an API will live on.

We are in the process of merging that API into the octavia repository
to merge the two load balancing projects into one and remove our
dependency on the neutron API process/endpoint.

Functionally it is our goal to allow the LBaaSv2 API to continue to
function for users.  For some period of time we will maintain a
pass-through proxy so that calls to the neutron API endpoint will
continue to function as they do today.  In addition, we will be
advertising the octavia endpoint and it will be compatible with the
current LBaaSv2 API.  Over time users can switch the endpoint they use
for LBaaSv2 calls from the neutron endpoint and they will continue to
operate as expected.

As part of this compatibility, the current LBaaSv2 drivers will move
behind the octavia API process as opposed to the current neutron API
process.  The legacy haproxy-namespace driver and the octavia (haproxy
based as well) driver will continue to exist for some time, though we
would like to deprecate the legacy haproxy-namespace driver.

Given the progress we have made up to Ocata-2, I expect Ocata will
release with the same configuration as Newton.  We will have the
LBaaSv2 API in place in octavia, but the driver and pass through work
will not be complete in time.  This means you will continue to use the
neutron endpoint to access neutron-lbaas drivers as you do today.

Michael

On Sun, Dec 18, 2016 at 6:52 PM, zhi <changzhi1...@gmail.com> wrote:
> Deal all.
>
> I have some questions about what will Neutron does in next releases.
>
> As far as I know, LBaaSv2 will be deprecated in the next 2 releases; maybe by
> the P release we will not see LBaaSv2 anymore, will we? Instead of LBaaSv2
> (HAProxy driver based), Octavia will be the only LBaaS solution, right?
>
> What about the namespace based L3 router? Do we have any ideas about an NFV
> solution for the L3 router, just like Octavia?
>
> Finally, where will the Neutron *aaS projects go in future? Now, vpnaas is not
> part of neutron governance. What about fwaas? Do we deprecate it in the next
> releases?
>
> I wish someone could give some exact words about these. Thanks a lot.
> :)
>
>
> Thanks
> Zhi Chang
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] devstack with neutron, OVS and sfc?

2016-12-14 Thread Michael Gale
Hello,

I originally posted this on the openstack user list, but it might be a better
question for this group.

Does anyone know if neutron-sfc is working in devstack?

I started to follow the instructions here:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining#
Single_Host_networking-sfc_installation_steps_and_testbed_setup

I ended up with a working devstack instance using:
- Ubuntu 16.04
- newton stable branches for devstack and networking-sfc
- I set an environment var to disable the OVS recompile, since 16.04 comes with
OVS 2.5.0 and the recompile was failing during the build.

I could build VM's, networks and I believe I setup a sfc implementation
correctly (portpairs, portgroups, portclassification, etc). I created a
ServiceVM on the same internal network as my source VM and used an neutron
router to access the outside world. I tried to route all outbound traffic
on port 80 through my ServiceVM.

The issue I ran into was that my ServiceVM would only see the initial
outbound SYN's after that the return traffic and data packets would always
go between the source VM and the external web server only.

From the different test scenarios I ran, I could always see the initial
outbound SYN packets; however, it always seems that the neutron router
routes the return packets back via the normal routing rules and ignores my
sfc setup.
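For anyone reproducing this, the wiki's single-host setup boils down to the
networking-sfc CLI flow below — a sketch only, with placeholder names and port
IDs; the subcommands are the Newton-era neutron CLI and should be checked
against `neutron help port-chain-create` on your install:

```shell
# Port pair = the ServiceVM's ingress/egress Neutron ports (placeholder IDs).
neutron port-pair-create --ingress sf-port-in --egress sf-port-out pp1

# Group the pair (a load-balancing set of equivalent service functions).
neutron port-pair-group-create --port-pair pp1 ppg1

# Classify outbound HTTP coming from the source VM's port.
neutron flow-classifier-create \
    --protocol tcp --destination-port 80:80 \
    --logical-source-port src-vm-port fc-http

# Chain the group and the classifier together.
neutron port-chain-create --port-pair-group ppg1 --flow-classifier fc-http pc1
```

Note the classifier above matches only the forward direction, which is
consistent with the symptom described: return traffic that doesn't match any
classifier is routed normally rather than steered back through the chain.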

Michael


[openstack-dev] [octavia] Meeting time and cancellations

2016-12-14 Thread Michael Johnson
Hello Octavia folks!

I wanted to remind folks to enter their timezone information in the
etherpad to help us come up with a meeting time that works for
everyone.  Please do so here:
https://etherpad.openstack.org/p/octavia-weekly-meeting-time

At the octavia IRC meeting today we agreed to take a break from our
weekly IRC meetings for the next two weeks.  Many of us are taking
some vacation time.

Octavia IRC meetings will resume on January 4th.

Michael



Re: [openstack-dev] [octavia] Re: [infra][neutron] RADWARE CI voting permission in Neutron

2016-12-14 Thread Michael Johnson
I have talked with Izik on IRC and have started work to get this fixed.

I will be setting up the octavia-ci group and fixing the permissions.

Michael

On Wed, Dec 14, 2016 at 12:49 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:
> Izik Penso <itz...@radware.com> wrote:
>
>> Hi Neutron, Infra Cores,
>>
>> Radware CI  would like to acquire voting (+/-1 Verified) permission.
>>
>> We voted before but we had to install a new CI and also changed our old
>> gerrit user with a new user because ssh key issues.
>>
>> We are triggered by neutron-lbaas changes.
>> https://wiki.openstack.org/wiki/ThirdPartySystems/Radware_CI
>>
>> Old user: radware3rdpartytesting
>> New user: radware3rdparty
>> email : openstack3rdpartytest...@radware.com
>>
>> See our comments :
>> https://review.openstack.org/#/c/343963/
>> https://review.openstack.org/#/c/28/
>> https://review.openstack.org/#/c/408534/
>> https://review.openstack.org/#/c/408105/
>
>
> So the CI is for LBaaS repo only? Then I believe you should talk to Octavia
> team that now maintains neutron-lbaas. Adding [octavia] to the topic.
>
> Ihar
>



Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-07 Thread Michael Johnson
Lubosz,

I would word that very differently.  We are not dropping LBaaSv2
support.  It is not going away.  I don't want there to be confusion on
this point.

We are however, moving/merging the API from neutron into Octavia.
So, during this work the code will be transitioning repositories and
you will need to carefully synchronize and/or manage the changes in
both places.
Currently the API changes have patchsets up in the Octavia repository.
However, the old namespace driver has not yet been migrated over.

Michael


On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz <lubosz.kos...@intel.com> wrote:
> Hello Zhi,
> So currently we’re working on dropping LBaaSv2 support.
> Octavia is a big-tent project providing LBaaS in OpenStack and after merging
> the LBaaSv2 API into Octavia we will deprecate that project and in the next 2
> releases we’re planning to completely wipe out that code repository. If you
> would like to help with LBaaS in OpenStack you’re more than welcome to start
> working with us on Octavia.
>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
>
> On Dec 6, 2016, at 6:04 AM, Gary Kotton <gkot...@vmware.com> wrote:
>
> Hi,
> I think that there is a move to Octavia. I suggest reaching out to that
> community and see how these changes can be added. Sounds like a nice
> addition
> Thanks
> Gary
>
> From: zhi <changzhi1...@gmail.com>
> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> Date: Tuesday, December 6, 2016 at 11:06 AM
> To: OpenStack List <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver
> based LBaaSv2
>
> Hi, all
>
> I am considering adding some new extensions for HAProxy driver based Neutron
> LBaaSv2.
>
> Extension 1: multiple subprocess support. By following this document[1], I
> think we can let our HAProxy based LBaaSv2 support this feature. By adding
> this feature, we can enhance loadbalancers performance.
>
> Extension 2: HTTP keep-alive support. By following this document[2], we
> can make our loadbalancers more effective.
>
>
> Any comments are welcome!
>
> Thanks
> Zhi Chang
>
>
> [1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
> [2]:
> http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive
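For concreteness, the two proposals map to haproxy.cfg roughly as follows —
an illustrative fragment using the HAProxy 1.6 directive names from the
documents linked above; the process/CPU numbers are placeholders, not what
the driver would actually render:

```
global
    nbproc 4          # extension 1: run four worker subprocesses
    cpu-map 1 0       # pin process 1 to CPU core 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

defaults
    mode http
    option http-keep-alive   # extension 2: reuse client-side connections
```

One caveat worth weighing: with `nbproc > 1`, stick tables and stats are
per-process in HAProxy 1.6, which may matter for session persistence.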
>
>
>
>



Re: [openstack-dev] [nova] Nominating Stephen Finucane for nova-core

2016-12-03 Thread Michael Still
+1, I'd value him on the team.

Michael

On Sat, Dec 3, 2016 at 2:22 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> I'm proposing that we add Stephen Finucane to the nova-core team. Stephen
> has been involved with nova for at least around a year now, maybe longer,
> my ability to tell time in nova has gotten fuzzy over the years.
> Regardless, he's always been eager to contribute and over the last several
> months has done a lot of reviews, as can be seen here:
>
> https://review.openstack.org/#/q/reviewer:sfinucan%2540redhat.com
>
> http://stackalytics.com/report/contribution/nova/180
>
> Stephen has been a main contributor and mover for the config option
> cleanup series that last few cycles, and he's a go-to person for a lot of
> the NFV/performance features in Nova like NUMA, CPU pinning, huge pages,
> etc.
>
> I think Stephen does quality reviews, leaves thoughtful comments, knows
> when to hold a +1 for a patch that needs work, and when to hold a -1 from a
> patch that just has some nits, and helps others in the project move their
> changes forward, which are all qualities I look for in a nova-core member.
>
> I'd like to see Stephen get a bit more vocal / visible, but we all handle
> that differently and I think it's something Stephen can grow into the more
> involved he is.
>
> So with all that said, I need a vote from the core team on this
> nomination. I honestly don't care to look up the rules too much on number
> of votes or timeline, I think it's pretty obvious once the replies roll in
> which way this goes.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>



-- 
Rackspace Australia


Re: [openstack-dev] [neutron][octavia] Neutron LBaaS governance change and Octavia to the big tent

2016-12-01 Thread Michael Johnson
There are a lot more than just those three (which are under packaging
and not neutron btw).

I will start working on moving/scrubbing the bugs today.

Michael

On Thu, Dec 1, 2016 at 7:30 AM, Brian Haley <brian.ha...@hpe.com> wrote:
> On 12/01/2016 08:54 AM, Ihar Hrachyshka wrote:
>>
>> Armando M. <arma...@gmail.com> wrote:
>>
>>> Hi folks,
>>>
>>> A few hours ago a governance change [1] has been approved by TC members.
>>> This
>>> means that from now on, the efforts for Load Balancing as a Service
>>> efforts
>>> rest officially in the hands of the Octavia PTL and the Octavia core
>>> team.
>>>
>>> I will work with the help of the respective core teams to implement a
>>> smooth
>>> transition. My suggestion at this point is for any ML communication that
>>> pertain LBaaS issues to include [octavia] tag on the email subject.
>>>
>>> Please do not hesitate to reach out for any questions and/or
>>> clarifications.
>>>
>>> Cheers,
>>> Armando
>>>
>>> [1] https://review.openstack.org/#/c/313056/
>>
>>
>> Should we also move all neutron lbaas-tagged bugs to octavia in LP? And
>> kill the
>> ‘lbaas’ tag from doc/source/policies/bugs.rst?
>
>
> Yes.  I added the lbaas bugs.rst tag removal to
> https://review.openstack.org/#/c/404872/ already.  Can someone from Octavia
> close and/or move-over any remaining lbaas bugs?  I only see three at
> https://bugs.launchpad.net/ubuntu/+source/neutron-lbaas
>
> Thanks,
>
> -Brian
>
>



Re: [openstack-dev] [all][tc] Exposing project team's metadata in README files

2016-11-30 Thread Michael Johnson
Hi Flavio,

These tags don't seem to be rendering/laying out well for octavia:
https://github.com/openstack/octavia/blob/master/README.rst

Any pointers to get this corrected or is this part of the backend
rendering work you mentioned in the keystone message above?

Michael

On Wed, Nov 30, 2016 at 1:34 AM, Flavio Percoco <fla...@redhat.com> wrote:
> On 25/11/16 13:46 +, Amrith Kumar wrote:
>>
>> Flavio,
>>
>> I see a number of patches[1] which have been landed on this project but I
>> find
>> that at least the ones that were landed for Trove, and a random sampling
>> of
>> the others all to be different from what you proposed below[2] in one
>> important aspect.
>>
>> In [2] you proposed a structure where the title of the document; or the
>> first,
>> and most prominent heading, would be the existing heading of the document,
>> and
>> the tags would be below that. In [2] for example, that was:
>>
>> "kombu - Messaging library for Python"
>>
>> and the tags would be in smaller font below that.
>
>
> Hi,
>
> Some fixes landed yesterday to improve the badges layout. For those
> interested,
> here's an example of what it looks like now:
>
> https://github.com/openstack/keystone
>
> Basically, the horizontal padding was reduced to the minimum needed and the
> badges width was set to the total width of the image.
>
> Hope this helps,
> Flavio
>
>
>> What I see in [3] the patch for Trove and the proposed example [4] is:
>>
>> "Team and repository tags" as the first, and most conspicuous header, and
>> the
>> header "Trove" below that.
>>
>> In some cases the second header is the same font as the "Team and
>> repository
>> tags" header.
>>
>> I think this change (these 124 changes) as proposed is not consistent
>> with
>> the proposal you made below, and certainly seem to be less suitable than
>> that
>> proposal. The end product for the four trove repositories [4], [5], [6],
>> and
>> [7]
>>
>> I think we should have a discussion on the ML whether we feel that this
>> new
>> structure is the appropriate one, and before some projects approve these
>> changes and others don't that these be all marked WF-1.
>>
>> Thanks,
>>
>> -amrith
>>
>> [1] https://review.openstack.org/#/q/topic:project-badges
>> [2] https://github.com/celery/kombu/blob/master/README.rst
>> [3] https://review.openstack.org/#/c/402547/
>> [4] https://gist.github.com/anonymous/4ccf1cc6e531bb50e78cb4d64dfe1065
>> [5] https://gist.github.com/1f38def1c65c733b7e4cec3d07399e99
>> [6] https://gist.github.com/2f1c6e9b800db6d4a49d46f5b0623c1d
>> [7] https://gist.github.com/9e9e2e2ba4ecfdece7827082114f8258
>>
>>
>>
>>
>>> -Original Message-
>>> From: Flavio Percoco [mailto:fla...@redhat.com]
>>> Sent: Thursday, October 13, 2016 7:07 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> <openstack-dev@lists.openstack.org>
>>> Subject: Re: [openstack-dev] [all][tc] Exposing project team's metadata
>>> in
>>> README files
>>>
>>> On 12/10/16 11:01 -0400, Doug Hellmann wrote:
>>> >Excerpts from Flavio Percoco's message of 2016-10-12 14:50:03 +0200:
>>> >> Greetings,
>>> >>
>>> >> One of the common complains about the existing project organization
>>> >> in the big tent is that it's difficult to wrap our heads around the
>>> >> many projects there are, their current state (in/out the big tent),
>>> >> their
>>> tags, etc.
>>> >>
>>> >> This information is available on the governance website[0]. Each
>>> >> official project team has a page there containing the information
>>> >> related to the deliverables managed by that team. Unfortunately, I
>>> >> don't think this page is checked often enough and I believe it's not
>>> >> known
>>> by everyone.
>>> >>
>>> >> In the hope that we can make this information clearer to people
>>> >> browsing the many repos (most likely on github), I'd like to propose
>>> >> that we include the information of each deliverable in the readme
>>> >> file. This information would be rendered along with the rest of the
>>> >> readme (at least on Github, which might not be our main repo but it's
>>> >> the
>>> place most humans go to to check our

Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

2016-11-29 Thread Michael Johnson
Hi Wanjing,

1. Yes, active/standby uses VRRP combined with our stickyness table
sync between the amphora.  However, since some clouds have had issues
with multicast we have opted to use the unicast mode between the
amphora instances.
2. This is a good question that we have been working on.  Currently we
do not have a working solution for containers or bare metal, but we
would like to.  We are close with nova-lxd, but we have hit some bugs.
Likewise with bare metal, I would expect we could integrate with
ironic pretty easily, it just hasn't been something the team has
worked on yet.  This is an area the project needs more work/support.
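For reference, active/standby is switched on with a single option in
octavia.conf — the option name below is as used in the Mitaka-era code and
should be verified against your release's sample config:

```
# Illustrative octavia.conf fragment (Mitaka-era option name).
[controller_worker]
# SINGLE is the default; ACTIVE_STANDBY boots an amphora pair running VRRP.
loadbalancer_topology = ACTIVE_STANDBY
```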

Michael

On Mon, Nov 28, 2016 at 3:45 PM, Wanjing Xu (waxu) <w...@cisco.com> wrote:
> Thanks Michael
>
> I still have the following questions:
> 1)  For active-standby, do the amphora VM pair really communicate with each
> other using the VRRP protocol, e.g. via a multicast VRRP IP?
> 2)  How do we install/configure Octavia so that amphora instances are
> spun up as containers or on bare metal?
>
> Thanks!
>
> Wanjing
>
> On 11/10/16, 5:12 PM, "Michael Johnson" <johnso...@gmail.com> wrote:
>
> Hi Wanjing,
>
> Yes, active/standby is available in Mitaka.  You must enable it via
> the octavia.conf file.
>
> As for benchmarking, there has been some work done in this space (see
> the octavia meeting logs last month), but it varies greatly depending
> on how your cloud is configured and/or the hardware it is on.
>
> Michael
>
>     On Thu, Nov 10, 2016 at 3:18 PM, Wanjing Xu (waxu) <w...@cisco.com> wrote:
> > Thanks, Michael.  Now I have brought up this octavia.  I have a 
> question:
> > Is HA supported on octavia, or is it yet to come?  I am using
> > stable/mitaka and I only see one amphora VM launched per loadbalancer.
> > And did anybody benchmark this octavia against a vendor box?
> >
> > Regards!
> >
> > Wanjing
> >
> > On 11/7/16, 10:02 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
> >
> >>Hi Wanjing,
> >>
> >>You are not seeing the network interfaces for the VIP and member
> >>networks because they are inside a network namespace for security
> >>reasons.  You can see these by issuing "sudo ip netns exec
> >>amphora-haproxy ifconfig -a".
> >>
> >>I'm not sure what version of octavia and neutron you are using, but
> >>the issue you noted about "dns_name" was fixed here:
> >>https://review.openstack.org/#/c/337939/
> >>
> >>Michael
> >>
> >>
> >>On Thu, Nov 3, 2016 at 11:29 AM, Wanjing Xu (waxu) <w...@cisco.com> 
> wrote:
> >>> Going through the log , I saw the following error on o-hm
> >>>
> >>> 2016-11-03 03:31:06.441 19560 ERROR
> >>> octavia.controller.worker.controller_worker 
> request_ids=request_ids)
> >>> 2016-11-03 03:31:06.441 19560 ERROR
> >>> octavia.controller.worker.controller_worker BadRequest: Unrecognized
> >>> attribute(s) 'dns_name'
> >>> 2016-11-03 03:31:06.441 19560 ERROR
> >>> octavia.controller.worker.controller_worker Neutron server returns
> >>> request_ids: ['req-1daed46e-ce79-471c-a0af-6d86d191eeb2']
> >>>
> >>> And it seemed that I need to upgrade my neutron client.  While I am
> >>>planning
> >>> to do it, could somebody please send me the document on how this vip 
> is
> >>> supposed to plug into the lbaas vm and what the failover is about ?
> >>>
> >>> Thanks!
> >>> Wanjing
> >>>
> >>>
> >>> From: Cisco Employee <w...@cisco.com>
> >>> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)"
> >>> <openstack-dev@lists.openstack.org>
> >>> Date: Wednesday, November 2, 2016 at 7:04 PM
> >>> To: "OpenStack Development Mailing List (not for usage questions)"
> >>> <openstack-dev@lists.openstack.org>
> >>> Subject: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able
> >>>to
> >>> ping loadbalancer ip
> >>>
> >>> So I bring up octavia using devstack (stable/mitaka).   I created a
> >>> loadbalander and a listener(not create member yet) and start to look 
> at
> >>

Re: [openstack-dev] [new][nimble] New project: Nimble

2016-11-27 Thread Michael Still
On Mon, Nov 28, 2016 at 4:37 PM, Jay Pipes <jaypi...@gmail.com> wrote:

[Snip]

>
> I don't see any compelling reason not to work with the Nova and Ironic
> projects and add the functionality you wish to see in those respective
> projects.
>

Jay, I agree and I don't. First off, I think improving our current projects
is a better engineering choice.

That said, there seems to be a repeated meme that splitting our efforts and
having more than one implementation of compute will somehow solve all our
problems, and I embrace that experiment. It seems very unlikely to me that
we'll end up in a happy place at the end, but then again, I've been wrong
before.

So I say have at it, so long as the outcome of the experiment is public.

Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia] About running Neutron LBaaS

2016-11-21 Thread Michael Johnson
Hi Yipei,

That error means the controller worker process was not able to reach
the amphora REST API.

I am guessing this is the issue with diskimage-builder which we have
patches up for, but not all of them have merged yet [1][2].

Try running my script:
https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9 to
rebuild the image and boot another amphora.

Also, could you provide a link to the docs you used that booted the
web servers on the lb-mgmt-lan?  I want to make sure we update that
and clarify for future users.

Michael

[1] https://review.openstack.org/399272
[2] https://review.openstack.org/399276

On Sat, Nov 19, 2016 at 9:46 PM, Yipei Niu <newy...@gmail.com> wrote:
> Hi, Michael,
>
> Thanks a lot for your comments.
>
> Please find the errors of o-cw.log in link
> http://paste.openstack.org/show/589806/. Hope it will help.
>
> About the lb-mgmt-net, I just followed the guide for running LBaaS. If I create
> an ordinary subnet with neutron for the two VMs, will that prevent the issue
> you mentioned from happening?
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [neutron-lbaas][octavia] About playing Neutron LBaaS

2016-11-18 Thread Michael Johnson
Hi Yipei,

A note: you probably want to use the tags [neutron-lbaas] and
[octavia] instead of [tricircle] to catch the LBaaS team's attention.

Since you are using the octavia driver, can you please include a link
to your o-cw.log?  This will tell us why the load balancer create
failed.

Also, I see that your two servers are on the lb-mgmt-net, this may
cause some problems with the load balancer when you add them as
members.  The lb-mgmt-net is intended to only be used for
communication between the octavia controller processes and the octavia
amphora (service VMs).  Since you didn't get as far as adding members
I'm sure this is not the root cause of the problem you are seeing.
The o-cw log will help us determine the root cause.

Michael


On Thu, Nov 17, 2016 at 11:48 PM, Yipei Niu <newy...@gmail.com> wrote:
> Hi, all,
>
> Recently I try to configure and play Neutron LBaaS in one OpenStack instance
> and have some trouble when creating a load balancer.
>
> I install devstack with neutron networking as well as LBaaS in one VM. The
> detailed configuration of local.conf is pasted in the link
> http://paste.openstack.org/show/589669/.
>
> Then I boot two VMs in the OpenStack instance, which can be reached via ping
> command from the host VM. The detailed information of the two VMs are listed
> in the following table.
>
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
> | ID                                   | Name    | Status | Task State | Power State | Networks                 |
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
> | 4cf7527b-05cc-49b7-84f9-3cc0f061be4f | server1 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6  |
> | bc7384a0-62aa-4987-89b6-8b98a6c467a9 | server2 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.12 |
> +--------------------------------------+---------+--------+------------+-------------+--------------------------+
>
> After building up the environment, I try to create a load balancer based on
> the guide in https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun. When
> executing the command "neutron lbaas-loadbalancer-create --name lb1
> private-subnet", the state of the load balancer remains "PENDING_CREATE" and
> finally becomes "ERROR". I checked q-agt.log and q-svc.log, the detailed
> info is pasted in http://paste.openstack.org/show/589676/.
>
> Look forward to your valuable comments. Thanks a lot!
>
> Best regards,
> Yipei
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] Reply: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-17 Thread Michael Johnson
Ganpat,

Great to hear.

FYI, our documentation lives here: http://docs.openstack.org/developer/octavia/

I will note that we have bugs for more documentation to be created,
but there is a good start at the link above.

Michael

On Wed, Nov 16, 2016 at 10:36 PM, Ganpat Agarwal
<gans.develo...@gmail.com> wrote:
> Thanks a lot Michael.
>
> Recreating the amphora image with Ubuntu Trusty solved the issue for me.
>
> We are planning to add octavia to our ansible-managed cloud, but could not
> find any concrete documentation. Will give it a try.
>
> Regards,
> Ganpat
>
>
>
> On Wed, Nov 16, 2016 at 10:20 PM, Michael Johnson <johnso...@gmail.com>
> wrote:
>>
>> Hi Ganpat,
>>
>> FYI, we are on freenode IRC: #openstack-lbaas if you would like to
>> chat interactively.
>>
>> So, I see the amp is expecting systemd, which probably means you are
>> using a "master" version of diskimage-builder with a stable/newton
>> version of Octavia.  On November 2nd, they switched diskimage-builder
>> to use a xenial Ubuntu image by default.  This patch just merged on
>> Octavia master to support that change:
>> https://review.openstack.org/396438
>>
>> I think you have two options:
>> 1. Set the environment variable DIB_RELEASE=trusty and recreate the
>> amphora image[1].
>> 2. Install the stable/newton version of diskimage-builder and recreate
>> the amphora image.
>>
>> For option one I have pasted a script I use to rebuild the image with
>> Ubuntu trusty.
>> Note, this script will delete your current image in glance and expects
>> the octavia repository to be located in /opt/stack/octavia, so please
>> update it as needed.
>>
>> Michael
>>
>> [1] https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9
>>
>> On Wed, Nov 16, 2016 at 8:25 AM, Ganpat Agarwal
>> <gans.develo...@gmail.com> wrote:
>> > Here are the steps i followed
>> >
>> > 1. Created a LB
>> >
>> > stack@devstack-openstack:~/devstack$ neutron lbaas-loadbalancer-list
>> >
>> > +--------------------------------------+------+-------------+---------------------+----------+
>> > | id                                   | name | vip_address | provisioning_status | provider |
>> > +--------------------------------------+------+-------------+---------------------+----------+
>> > | 1ffcfe97-99a3-47c1-9df1-63bac71d9e04 | lb1  | 10.0.0.10   | PENDING_CREATE      | octavia  |
>> > +--------------------------------------+------+-------------+---------------------+----------+
>> >
>> > 2. List amphora instance
>> > stack@devstack-openstack:~/devstack$ nova list
>> >
>> > +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
>> > | ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                         |
>> > +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
>> > | 89dc06b7-00a9-456f-abc9-50f14e1bc78b | amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=10.0.0.11, fdbc:aa5f:a6ae:0:f816:3eff:fe0b:86d7 |
>> > +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
>> >
>> > 3. able to ssh on lb-mgmt-ip , 192.168.0.6
>> >
>> > Network config
>> >
>> > ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ ip a
>> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> > group
>> > default qlen 1
>> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> > inet 127.0.0.1/8 scope host lo
>> >valid_lft forever preferred_lft forever
>> > inet6 ::1/128 scope host
>> >valid_lft forever preferred_lft forever
>> > 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast
>> > state
>> > UP group default qlen 1000
>> > link/ether fa:16:3e:02:a7:50 brd ff:ff:ff:ff:ff:ff
>> > inet 192.168.0.6/24 brd

[openstack-dev] [networking-ovn] OVN native gateway workflow

2016-11-17 Thread Michael Kashin
Greetings,
I'm testing OVN integration with RDO OpenStack. I can create tenant
networks and attach VMs to them with no issues. However, I don't understand how
gateway scheduling works. My understanding is that whenever I create an external
provider network and attach it to my DLR, OVN should schedule a gateway
router and connect it to the DLR via a special transit network. However, I
don't see that happening.

My ML2 OVN settings:
ovn_l3_mode=true
ovn_l3_scheduler=leastloaded
ovn_native_dhcp=true
My workflow (assuming DLR R1 already exists):
neutron net-create EXT-NET --provider:network_type flat
 --provider:physical_network extnet   --router:external --shared
neutron subnet-create --name EXT-SUB --enable_dhcp=False
--allocation-pool=start=169.254.0.50,end=169.254.0.99 --gateway=169.254.0.1
EXT-NET 169.254.0.0/24
neutron router-gateway-set R1 EXT-NET

At the end of all this, the only thing I see in my northbound DB is a new LS:

switch 151ac068-ee99-4324-b785-40709b2e2061 (neutron-b4786af5-cf70-4fc2-8f36-e9d540165d37)
    port provnet-b4786af5-cf70-4fc2-8f36-e9d540165d37
        addresses: ["unknown"]
    port fb73ca73-488f-40aa-89e1-e8e312de7a77
        addresses: ["fa:16:3e:1d:75:66 169.254.0.50"]

I don't see any new GW router or a link between a DLR and GW.
Can someone please explain what the workflow should look like for OVN
native NAT and FIP connectivity?
Cheers,
Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
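For reference, the `leastloaded` value set for `ovn_l3_scheduler` above picks the gateway chassis that currently hosts the fewest gateway routers. A toy sketch of that policy (illustrative only, not networking-ovn's actual code; the function and mapping names are assumptions):

```python
def least_loaded_chassis(chassis_load):
    """Pick the chassis hosting the fewest gateway routers.

    chassis_load maps a chassis name to the number of gateway routers
    already scheduled on it; ties break on name for determinism.
    """
    if not chassis_load:
        raise ValueError("no candidate chassis to schedule the gateway on")
    return min(chassis_load, key=lambda name: (chassis_load[name], name))
```

With this policy, a new gateway for R1 would land on the emptiest chassis rather than a random one.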


Re: [openstack-dev] Reply: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Michael Johnson
Hi Ganpat,

FYI, we are on freenode IRC: #openstack-lbaas if you would like to
chat interactively.

So, I see the amp is expecting systemd, which probably means you are
using a "master" version of diskimage-builder with a stable/newton
version of Octavia.  On November 2nd, they switched diskimage-builder
to use a xenial Ubuntu image by default.  This patch just merged on
Octavia master to support that change:
https://review.openstack.org/396438

I think you have two options:
1. Set the environment variable DIB_RELEASE=trusty and recreate the
amphora image[1].
2. Install the stable/newton version of diskimage-builder and recreate
the amphora image.

For option one I have pasted a script I use to rebuild the image with
Ubuntu trusty.
Note, this script will delete your current image in glance and expects
the octavia repository to be located in /opt/stack/octavia, so please
update it as needed.

Michael

[1] https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9

On Wed, Nov 16, 2016 at 8:25 AM, Ganpat Agarwal
<gans.develo...@gmail.com> wrote:
> Here are the steps i followed
>
> 1. Created a LB
>
> stack@devstack-openstack:~/devstack$ neutron lbaas-loadbalancer-list
> +--------------------------------------+------+-------------+---------------------+----------+
> | id                                   | name | vip_address | provisioning_status | provider |
> +--------------------------------------+------+-------------+---------------------+----------+
> | 1ffcfe97-99a3-47c1-9df1-63bac71d9e04 | lb1  | 10.0.0.10   | PENDING_CREATE      | octavia  |
> +--------------------------------------+------+-------------+---------------------+----------+
>
> 2. List amphora instance
> stack@devstack-openstack:~/devstack$ nova list
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
> | ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                         |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
> | 89dc06b7-00a9-456f-abc9-50f14e1bc78b | amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=10.0.0.11, fdbc:aa5f:a6ae:0:f816:3eff:fe0b:86d7 |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------------------------------------------------------------------------------+
>
> 3. able to ssh on lb-mgmt-ip , 192.168.0.6
>
> Network config
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ ip a
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether fa:16:3e:02:a7:50 brd ff:ff:ff:ff:ff:ff
> inet 192.168.0.6/24 brd 192.168.0.255 scope global ens3
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe02:a750/64 scope link
>valid_lft forever preferred_lft forever
> 3: ens6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default
> qlen 1000
>
>
> 4. No amphora agent running
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> amphora-agent status
> ● amphora-agent.service
>Loaded: not-found (Reason: No such file or directory)
>Active: inactive (dead)
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> amphora-agent start
> Failed to start amphora-agent.service: Unit amphora-agent.service not found.
>
>
> How to proceed from here?
>
>
> On Wed, Nov 16, 2016 at 6:04 PM, 洪 赵 <hong.z...@live.com> wrote:
>>
>> After the amphora vm was created, the Octavia worker tried to plug the VIP
>> into the amphora vm, but failed. It could not connect to the amphora agent. You
>> may ssh to the vm and check if the networks and ip addresses are correctly
>> set.
>>
>>
>>
>> Good luck.
>>
>> -hzhao
>>
>>
>>
>> From: Ganpat Agarwal
>> Sent: November 16, 2016 14:40
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [neutron][lbaasv2][octavia] Not able to create
>> loadbalancer
>>
>>

Re: [openstack-dev] [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Michael Johnson
Hi Ganpat,

Yes, as hzhao mentioned, this error means that the controller was
unable to connect to the amphora over the management network.

Please check that this section is properly setup:
http://docs.openstack.org/developer/octavia/guides/dev-quick-start.html#load-balancer-network-configuration

You can also use our devstack plugin.sh script as a reference to how
we set it up in devstack environments:
https://github.com/openstack/octavia/blob/master/devstack/plugin.sh

Michael

On Tue, Nov 15, 2016 at 10:36 PM, Ganpat Agarwal
<gans.develo...@gmail.com> wrote:
> Hi All,
>
> I am using devstack stable/newton branch and have deployed octavia for
> neutron-lbaasv2.
>
> Here is my local.conf
>
> [[local|localrc]]
> HOST_IP=10.0.2.15
> DATABASE_PASSWORD=$ADMIN_PASSWORD
> MYSQL_PASSWORD=$ADMIN_PASSWORD
> RABBIT_PASSWORD=$ADMIN_PASSWORD
> SERVICE_PASSWORD=$ADMIN_PASSWORD
> SERVICE_TOKEN=tokentoken
> DEST=/opt/stack
>
> # Disable Nova Network and enable Neutron
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
>
> # Enable LBaaS v2
> enable_plugin neutron-lbaas
> https://git.openstack.org/openstack/neutron-lbaas stable/newton
> enable_plugin octavia https://git.openstack.org/openstack/octavia
> stable/newton
> enable_service q-lbaasv2
> enable_service octavia
> enable_service o-cw
> enable_service o-hm
> enable_service o-hk
> enable_service o-api
>
> # Neutron options
> Q_USE_SECGROUP=True
> FLOATING_RANGE="172.18.161.0/24"
> FIXED_RANGE="10.0.0.0/24"
> Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
> PUBLIC_NETWORK_GATEWAY="172.18.161.1"
> PUBLIC_INTERFACE=eth0
>
> LOG=True
> VERBOSE=True
> LOGFILE=$DEST/logs/stack.sh.log
> LOGDAYS=1
> SCREEN_LOGDIR=$DEST/logs/screen
> SYSLOG=True
> SYSLOG_HOST=$HOST_IP
> SYSLOG_PORT=516
> RECLONE=yes
>
>
> While creating loadbalancer, i am getting error in octavia worker
>
> 2016-11-16 06:13:08.264 4115 INFO octavia.controller.queue.consumer [-]
> Starting consumer...
> 2016-11-16 06:14:58.507 4115 INFO octavia.controller.queue.endpoint [-]
> Creating load balancer '51082942-b348-4900-bde9-6d617dba8f99'...
> 2016-11-16 06:14:59.204 4115 INFO
> octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB
> with id 93e28edd-71ee-4448-bc70-b0424dbd64f5
> 2016-11-16 06:14:59.334 4115 INFO octavia.certificates.generator.local [-]
> Signing a certificate request using OpenSSL locally.
> 2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-]
> Using CA Certificate from config.
> 2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-]
> Using CA Private Key from config.
> 2016-11-16 06:14:59.337 4115 INFO octavia.certificates.generator.local [-]
> Using CA Private Key Passphrase from config.
> 2016-11-16 06:15:15.085 4115 INFO
> octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for
> amphora: 93e28edd-71ee-4448-bc70-b0424dbd64f5 with compute id
> f339b48d-1445-47e0-950b-ee69c2add81f for load balancer:
> 51082942-b348-4900-bde9-6d617dba8f99
> 2016-11-16 06:15:15.208 4115 INFO
> octavia.network.drivers.neutron.allowed_address_pairs [-] Port
> ac27cbb8-078d-47fd-824c-e95b0ebff392 already exists. Nothing to be done.
> 2016-11-16 06:15:39.708 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 2016-11-16 06:15:47.712 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 
> several 100 lines with same message
> 
> 2016-11-16 06:24:29.310 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> ^[2016-11-16 06:24:34.316 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 2016-11-16 06:24:39.317 4115 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries
> (currently set to 100) exhausted.  The amphora is unavailable.
> 2016-11-16 06:24:39.327 4115 WARNING
> octavia.controller.worker.controller_worker [-] Task
> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug'
> (19f29ca9-3e7f-4629-b976-b4d24539d8ed) transitioned into state 'FAILURE'
> from state 'RUNNING'
> 33 predecessors (most recent first):
>   Atom
> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs'
> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer':
> },
> 'provides': {u'93e28edd-71ee-4448-bc70-b0424dbd64f5':
>  0x7f8774263610>}}
>   |__Ato
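The "Could not connect to instance. Retrying." flood in the log above is a bounded retry loop: the worker polls the amphora REST API until it answers or the retry budget (100 here) is exhausted. A minimal sketch of that pattern (names and signature are illustrative, not Octavia's actual code):

```python
import time

def connect_with_retries(connect, retries=100, wait=5.0, sleep=time.sleep):
    """Call `connect` until it succeeds or `retries` attempts are used up."""
    for _ in range(retries):
        try:
            return connect()
        except ConnectionError:
            # Mirrors the log above: warn, wait, and try again.
            print("Could not connect to instance. Retrying.")
            sleep(wait)
    raise RuntimeError(
        "Connection retries (currently set to %d) exhausted. "
        "The amphora is unavailable." % retries)
```

When the amphora-agent never comes up (as in the broken-image case discussed elsewhere in this thread), every attempt raises, so the task only fails after roughly retries * wait seconds, which is why the log shows several hundred identical lines before the final error.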

Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

2016-11-10 Thread Michael Johnson
Hi Wanjing,

Yes, active/standby is available in Mitaka.  You must enable it via
the octavia.conf file.

As for benchmarking, there has been some work done in this space (see
the octavia meeting logs last month), but it varies greatly depending
on how your cloud is configured and/or the hardware it is on.

Michael

On Thu, Nov 10, 2016 at 3:18 PM, Wanjing Xu (waxu) <w...@cisco.com> wrote:
> Thanks, Michael.  Now I have brought up this octavia.  I have a question:
> Is HA supported on octavia, or is it yet to come?  I am using
> stable/mitaka and I only see one amphorae vm launched per loadbalancer.
> And did anybody benchmark this octavia against a vendor box?
>
> Regards!
>
> Wanjing
>
> On 11/7/16, 10:02 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>
>>Hi Wanjing,
>>
>>You are not seeing the network interfaces for the VIP and member
>>networks because they are inside a network namespace for security
>>reasons.  You can see these by issuing "sudo ip netns exec
>>amphora-haproxy ifconfig -a".
>>
>>I'm not sure what version of octavia and neutron you are using, but
>>the issue you noted about "dns_name" was fixed here:
>>https://review.openstack.org/#/c/337939/
>>
>>Michael
>>
>>
>>On Thu, Nov 3, 2016 at 11:29 AM, Wanjing Xu (waxu) <w...@cisco.com> wrote:
>>> Going through the log , I saw the following error on o-hm
>>>
>>> 2016-11-03 03:31:06.441 19560 ERROR
>>> octavia.controller.worker.controller_worker request_ids=request_ids)
>>> 2016-11-03 03:31:06.441 19560 ERROR
>>> octavia.controller.worker.controller_worker BadRequest: Unrecognized
>>> attribute(s) 'dns_name'
>>> 2016-11-03 03:31:06.441 19560 ERROR
>>> octavia.controller.worker.controller_worker Neutron server returns
>>> request_ids: ['req-1daed46e-ce79-471c-a0af-6d86d191eeb2']
>>>
>>> And it seemed that I need to upgrade my neutron client.  While I am
>>>planning
>>> to do it, could somebody please send me the document on how this vip is
>>> supposed to plug into the lbaas vm and what the failover is about ?
>>>
>>> Thanks!
>>> Wanjing
>>>
>>>
>>> From: Cisco Employee <w...@cisco.com>
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev@lists.openstack.org>
>>> Date: Wednesday, November 2, 2016 at 7:04 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> <openstack-dev@lists.openstack.org>
>>> Subject: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able
>>>to
>>> ping loadbalancer ip
>>>
>>> So I bring up octavia using devstack (stable/mitaka).   I created a
>>> loadbalander and a listener(not create member yet) and start to look at
>>>how
>>> things are connected to each other.  I can ssh to amphora vm and I do
>>>see a
>>> haproxy is up with front end point to my listener.  I tried to ping
>>>(from
>>> dhcp namespace) to the loadbalancer ip, and ping could not go through.
>>>I am
>>> wondering how packet is supposed to reach this amphora vm.  I can see
>>>that
>>> the vm is launched on both network(lb_mgmt network and my vipnet), but I
>>> don¹t see any nic associated with my vipnet:
>>>
>>> ubuntu@amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699:~$ ifconfig -a
>>> eth0  Link encap:Ethernet  HWaddr fa:16:3e:b4:b2:45
>>>   inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
>>>   inet6 addr: fe80::f816:3eff:feb4:b245/64 Scope:Link
>>>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>>   RX packets:2496 errors:0 dropped:0 overruns:0 frame:0
>>>   TX packets:2626 errors:0 dropped:0 overruns:0 carrier:0
>>>   collisions:0 txqueuelen:1000
>>>   RX bytes:307518 (307.5 KB)  TX bytes:304447 (304.4 KB)
>>>
>>> loLink encap:Local Loopback
>>>   inet addr:127.0.0.1  Mask:255.0.0.0
>>>   inet6 addr: ::1/128 Scope:Host
>>>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>>>   RX packets:212 errors:0 dropped:0 overruns:0 frame:0
>>>   TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
>>>   collisions:0 txqueuelen:0
>>>   

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-10 Thread Michael Johnson
Hi Gary,

The LBaaS DB table contents will be moved into the Octavia database as
part of the migration process/tool.

Michael

On Wed, Nov 9, 2016 at 11:13 PM, Gary Kotton <gkot...@vmware.com> wrote:
> Will the same DB be maintained or will the LBaaS DB be moved to that of
> Octavia. I am really concerned about this and I feel that it will cause
> production problems.
>
>
>
> From: Kevin Benton <ke...@benton.pub>
> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> Date: Wednesday, November 9, 2016 at 11:43 PM
> To: OpenStack List <openstack-dev@lists.openstack.org>
>
>
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
> The people working on the migration are ensuring API compatibility and are
> even leaving in a shim on the Neutron side for some time so you don't even
> have to change endpoints initially. It should be a seamless change.
>
>
>
> On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>
> Just please don't make this an lbv3 thing that completely breaks
> compatibility of existing lb's yet again. If it's just a "point url endpoint
> from thing like x to thing like y" change in one place, that's ok. I still have v1
> lb's in existence that I have to deal with, and a backwards-incompatible v3
> would just cause me to abandon lbaas altogether, I think, as it would show
> the lbaas stuff is just not maintainable.
>
> Thanks,
> Kevin
>
> 
>
> From: Armando M. [arma...@gmail.com]
> Sent: Wednesday, November 09, 2016 8:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
>
>
> On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
>
> Hi,
> What about neutron-lbaas project? Is this project still alive and kicking to
> the merge is done or are we going to continue to maintain it? I feel like we
> are between a rock and a hard place here. LBaaS is in production and it is
> not clear the migration process. Will Octavia have the same DB models as
> LBaaS or will there be a migration?
> Sorry for the pessimism but I feel that things are very unclear and that we
> cannot even indicate to our community/consumers what to use/expect.
> Thanks
> Gary
>
>
>
> http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>
>
>
>
> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>
> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [1].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neutron-lbaas), neutron api proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full impleme

Re: [openstack-dev] [nova] vendordata v2 ocata summit recap

2016-11-09 Thread Michael Still
This is a good summary, thanks. I finally uploaded the spec which describes
the decisions from the summit. Its here:

https://review.openstack.org/395959

Michael

On Thu, Nov 10, 2016 at 7:11 AM, Matt Riedemann <mrie...@linux.vnet.ibm.com>
wrote:

> Michael Still led a session on completing the vendordata v2 work that was
> started in the Newton release. The full etherpad is here:
>
> https://etherpad.openstack.org/p/ocata-nova-summit-vendoradatav2
>
> Michael started by advertising a bit what it is since it's a new feature
> and it's meant to replace the old class-path loading way of getting vendor
> metadata (and ultimately allows us to remove hooks).
>
> The majority of the session was spent discussing a gap we have in
> providing token information on the request to the vendordata server.
>
> For example, when creating a server we have a user context and token and
> can provide that information to the vendordata REST API, but on subsequent
> GETs from the guest itself we don't have a token. After quite a bit of
> discussion in the room, including with Adam and Dolph from the keystone
> team, we decided to:
>
> 1. Stash the user's roles from the initial create in the nova database and
> re-use those on subsequent GET requests.
>
> 2. Use a service token to pass the other information to the vendordata v2
> REST API so that it knows the request is coming from Nova. This was
> considered a bug fix and not a new feature so we can backport the
> functionality.
>
> Other things that are needed at some point:
>
> 1. Add some caching of the response using the Cache-Control header.
>
> 2. Add a configuration option to toggle whether or not the server create
> should fail if a vendordata response is not 200. Today if we get a non-200
> response we log a warning and return {} to the caller. Some vendordata
> scenarios require that the metadata get into the guest as soon as it's
> created or else it becomes essentially a zombie and cleaning it up later is
> painful. So provide an option to fail that server create if we can't get
> the necessary data into the guest on server create. Note that this would
> only fail the server build if using config drive since nova is the caller.
> When cloud-init is making the request from within the guest, nova has lost
> control at that point and any failures are going to have to be cleaned up
> separately.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Rackspace Australia
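The Cache-Control caching item in the recap above could be approached roughly like this; a hedged sketch under the assumption that the vendordata server advertises a max-age directive (the class and method names are illustrative, not Nova's actual implementation):

```python
import time

def parse_max_age(cache_control):
    """Return the max-age value (seconds) from a Cache-Control header, else None."""
    for directive in (cache_control or "").split(","):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value)
    return None

class VendordataCache:
    """Cache vendordata responses for the lifetime the server advertises."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._entries = {}  # url -> (expires_at, body)

    def get(self, url):
        entry = self._entries.get(url)
        if entry and self._clock() < entry[0]:
            return entry[1]
        return None  # missing or expired; the caller re-fetches

    def put(self, url, body, cache_control):
        max_age = parse_max_age(cache_control)
        if max_age:  # only cache when the server allows it
            self._entries[url] = (self._clock() + max_age, body)
```

This keeps the cache policy entirely in the vendordata server's hands: a response without max-age (or with max-age=0) is simply never cached.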


Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Michael Johnson
Kevin,

Yep, totally understand.

This is not a V3, it is simply moving the API from running under
neutron to running under the octavia API process.  It will still be
the LBaaSv2 API, just a new endpoint (though the old endpoint will
work for some time into the future).

Michael

On Wed, Nov 9, 2016 at 12:28 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> Just please don't make this an lbv3 thing that completely breaks
> compatibility of existing lb's yet again. If it's just a "point url endpoint
> from thing like x to thing like y" change in one place, that's ok. I still have v1
> lb's in existence that I have to deal with, and a backwards-incompatible v3
> would just cause me to abandon lbaas altogether, I think, as it would show
> the lbaas stuff is just not maintainable.
>
> Thanks,
> Kevin
> 
> From: Armando M. [arma...@gmail.com]
> Sent: Wednesday, November 09, 2016 8:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS
> retrospective and next steps recap
>
>
>
> On 9 November 2016 at 05:50, Gary Kotton <gkot...@vmware.com> wrote:
>>
>> Hi,
>> What about the neutron-lbaas project? Is this project still alive and kicking
>> until the merge is done, or are we going to continue to maintain it? I feel like
>> we are between a rock and a hard place here. LBaaS is in production and the
>> migration process is not clear. Will Octavia have the same DB models as
>> LBaaS or will there be a migration?
>> Sorry for the pessimism but I feel that things are very unclear and that
>> we cannot even indicate to our community/consumers what to use/expect.
>> Thanks
>> Gary
>
>
> http://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>
>>
>>
>> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>>
>> Ocata LBaaS retrospective and next steps recap
>> --
>>
>> This session lightly touched on the work in the newton cycle, but
>> primarily focused on planning for the Ocata release and the LBaaS spin
>> out of neutron and merge into the octavia project [1].  Notes were
>> captured on the etherpad [2].
>>
>> The focus of work for Ocata in neutron-lbaas and octavia will be on
>> the spin out/merge and not new features.
>>
>> Work has started on merging neutron-lbaas into the octavia project
>> with API sorting/pagination, quota support, keystone integration,
>> neutron-lbaas driver shim, and documentation updates.  Work is still
>> needed for policy support, the API shim to handle capability gaps
>> (example: stats are by listener in octavia, but by load balancer in
>> neutron-lbaas), neutron API proxy, a database migration script from
>> the neutron database to the octavia database for existing non-octavia
>> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
>> the octavia API server.
>>
>> The room agreed that since we will have a shim/proxy in neutron for
>> some time, updating the OpenStack client can be deferred to a future
>> cycle.
>>
>> There is a lot of concern about Ocata being a short cycle and the
>> amount of work to be done.  There is hope that additional resources
>> will help out with this task to allow us to complete the spin
>> out/merge for Ocata.
>>
>> We discussed the current state of the active/active topology patches
>> and agreed that it is unlikely this will merge in Ocata.  There are a
>> lot of open comments and work to do on the patches.  It appears that
>> these patches may have been created against an old release and require
>> significant updating.
>>
>> Finally there was a question about when octavia would implement
>> metadata tags.  When we dug into the need for the tags we found that
>> what was really wanted is a full implementation of the flavors
>> framework [3] [4].  Some vendors expressed interest in finishing the
>> flavors framework for Octavia.
>>
>> Thank you to everyone that participated in our design session and
>> etherpad.
>>
>> Michael
>>
>> [1]
>> https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
>> [2]
>> https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
>> [3]
>>

Re: [openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-09 Thread Michael Johnson
Hi Gary,

Our intent is to merge neutron-lbaas into the Octavia project.  When
this is complete, the neutron-lbaas project will remain for some time
as a lightweight shim/proxy that provides the legacy neutron endpoint
experience.

The database models are already very similar to the existing
neutron-lbaas models (by design) and we will finish aligning these as
part of the merge work.  For example, the names that were added to
some objects will be added in the octavia database as well.

We are also planning a migration from the neutron LBaaSv2 database to
the octavia database.  This should not impact existing running load
balancers.

Michael



On Wed, Nov 9, 2016 at 5:50 AM, Gary Kotton <gkot...@vmware.com> wrote:
> Hi,
> What about the neutron-lbaas project? Is this project still alive and kicking
> until the merge is done, or are we going to continue to maintain it? I feel like we
> are between a rock and a hard place here. LBaaS is in production and the
> migration process is not clear. Will Octavia have the same DB models as
> LBaaS or will there be a migration?
> Sorry for the pessimism but I feel that things are very unclear and that we 
> cannot even indicate to our community/consumers what to use/expect.
> Thanks
> Gary
>
> On 11/8/16, 1:36 AM, "Michael Johnson" <johnso...@gmail.com> wrote:
>
> Ocata LBaaS retrospective and next steps recap
> --
>
> This session lightly touched on the work in the newton cycle, but
> primarily focused on planning for the Ocata release and the LBaaS spin
> out of neutron and merge into the octavia project [1].  Notes were
> captured on the etherpad [2].
>
> The focus of work for Ocata in neutron-lbaas and octavia will be on
> the spin out/merge and not new features.
>
> Work has started on merging neutron-lbaas into the octavia project
> with API sorting/pagination, quota support, keystone integration,
> neutron-lbaas driver shim, and documentation updates.  Work is still
> needed for policy support, the API shim to handle capability gaps
> (example: stats are by listener in octavia, but by load balancer in
> neutron-lbaas), neutron API proxy, a database migration script from
> the neutron database to the octavia database for existing non-octavia
> load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
> the octavia API server.
>
> The room agreed that since we will have a shim/proxy in neutron for
> some time, updating the OpenStack client can be deferred to a future
> cycle.
>
> There is a lot of concern about Ocata being a short cycle and the
> amount of work to be done.  There is hope that additional resources
> will help out with this task to allow us to complete the spin
> out/merge for Ocata.
>
> We discussed the current state of the active/active topology patches
> and agreed that it is unlikely this will merge in Ocata.  There are a
> lot of open comments and work to do on the patches.  It appears that
> these patches may have been created against an old release and require
> significant updating.
>
> Finally there was a question about when octavia would implement
> metadata tags.  When we dug into the need for the tags we found that
> what was really wanted is a full implementation of the flavors
> framework [3] [4].  Some vendors expressed interest in finishing the
> flavors framework for Octavia.
>
> Thank you to everyone that participated in our design session and 
> etherpad.
>
> Michael
>
> [1] 
> https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
> [2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
> [3] 
> https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
> [4] 
> https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html
>



Re: [openstack-dev] [requirements][kolla][security] pycrypto vs cryptography

2016-11-08 Thread Gardiner Michael
Hey Guys,

If FIPS 140-2 compliance is important you might want to look at something
like a PKCS#11 wrapper and let your PKCS#11-compliant module be the deciding
factor in meeting that compliance level.  There are wrappers for most
languages.  (We have our own python p11 implementation tailored to our Luna
HSMs here https://github.com/gemalto/pycryptoki but you should be able to
use a more generic project if you choose)  

There are other commonly used APIs such as OpenSSL, Java JCA/JCE, and MS
CAPI/CNG, but given that we're talking about Python on Linux, a PKCS#11
approach is probably best.

Beyond just "140-2" there are different levels.  Pure software
implementations are limited to level 1. Levels 2, 3, and 4 require hardware
and have stricter requirements as you go up the chain.  Someone asking
for FIPS 140-2 compliance will also generally have a minimum level that they
require.

I work for a vendor of hardware security modules, so I have biases
towards our solutions, but without getting into any of that I do believe that
if you want to take FIPS into consideration you should stick to a broadly
adopted crypto API that allows you to swap out the backend module.

Cheers,

Mike Gardiner
Systems Security Architect
Gemalto

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: November-06-16 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [requirements][kolla][security] pycrypto vs
cryptography

On 2016-11-06 14:59:03 + (+), Jeremy Stanley wrote:
> On 2016-11-06 08:05:51 + (+), Steven Dake (stdake) wrote:
[...]
> > An orthogonal question I have received from one of our community 
> > members (Pavo on irc) is whether pycrypto (or if we move to
> > cryptography) provide FIPS-140-2 compliance.
> 
> My understanding is that if you need, for example, a FIPS-compliant 
> AES implementation under the hood, then this is dependent more on what 
> backend libraries you're using... e.g., 
> https://www.openssl.org/docs/fips.html
> https://www.openssl.org/docs/fipsvalidation.html

I should clarify, I was referring specifically to pyca/cryptography's
OpenSSL backend. In contrast, the pycrypto maintainers seem to have copied
and forked a variety of algorithms (some of which seem to be based on
NIST/FIPS reference implementations for C, or backports from bits of the
Py3K stdlib, but have undergone subsequent modification), so they very
likely have not been put through any sort of direct compliance validation:
https://github.com/dlitz/pycrypto/blob/master/src/AES.c
https://github.com/dlitz/pycrypto/blob/master/src/SHA512.c
et cetera...
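
A quick stdlib-only way to see which crypto library a given Python interpreter links against — relevant because pyca/cryptography's OpenSSL backend inherits whatever validation status that system library has, unlike pycrypto's bundled C code:

```python
# Inspect the OpenSSL build backing this interpreter's crypto primitives.
import ssl
import hashlib

# Version string of the linked library, e.g. "OpenSSL 1.0.2g ..." (varies
# by system; this is where FIPS validation would or would not apply).
print(ssl.OPENSSL_VERSION)

# hashlib's named constructors are backed by that library on CPython,
# unlike pycrypto's copied-in C implementations.
print(hashlib.sha512(b"abc").hexdigest()[:16])  # -> ddaf35a193617aba
```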
--
Jeremy Stanley





[openstack-dev] [neutron] [lbaas] [octavia] Ocata LBaaS retrospective and next steps recap

2016-11-07 Thread Michael Johnson
Ocata LBaaS retrospective and next steps recap
--

This session lightly touched on the work in the newton cycle, but
primarily focused on planning for the Ocata release and the LBaaS spin
out of neutron and merge into the octavia project [1].  Notes were
captured on the etherpad [2].

The focus of work for Ocata in neutron-lbaas and octavia will be on
the spin out/merge and not new features.

Work has started on merging neutron-lbaas into the octavia project
with API sorting/pagination, quota support, keystone integration,
neutron-lbaas driver shim, and documentation updates.  Work is still
needed for policy support, the API shim to handle capability gaps
(example: stats are by listener in octavia, but by load balancer in
neutron-lbaas), neutron API proxy, a database migration script from
the neutron database to the octavia database for existing non-octavia
load balancers, and adding the "bug for bug" neutron-lbaas v2 API to
the octavia API server.
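
To illustrate the capability-gap bridging mentioned above, a shim could answer the legacy per-load-balancer stats call by summing octavia's per-listener stats (a sketch; the function name and stat keys here are illustrative, not the actual octavia shim code):

```python
# Hypothetical shim helper: neutron-lbaas reports stats per load balancer,
# octavia per listener, so aggregate listener stats into one result.

def loadbalancer_stats(listener_stats):
    """Sum a list of per-listener stats dicts into one load-balancer dict."""
    totals = {"bytes_in": 0, "bytes_out": 0,
              "active_connections": 0, "total_connections": 0}
    for stats in listener_stats:
        for key in totals:
            totals[key] += stats.get(key, 0)
    return totals
```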

The room agreed that since we will have a shim/proxy in neutron for
some time, updating the OpenStack client can be deferred to a future
cycle.

There is a lot of concern about Ocata being a short cycle and the
amount of work to be done.  There is hope that additional resources
will help out with this task to allow us to complete the spin
out/merge for Ocata.

We discussed the current state of the active/active topology patches
and agreed that it is unlikely these will merge in Ocata.  There are a
lot of open comments and work to do on the patches.  It appears that
these patches may have been created against an old release and require
significant updating.

Finally there was a question about when octavia would implement
metadata tags.  When we dug into the need for the tags we found that
what was really wanted is a full implementation of the flavors
framework [3] [4].  Some vendors expressed interest in finishing the
flavors framework for Octavia.

Thank you to everyone that participated in our design session and etherpad.

Michael

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/newton/kill-neutron-lbaas.html
[2] https://etherpad.openstack.org/p/ocata-neutron-octavia-lbaas-session
[3] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/neutron-flavor-framework-templates.html
[4] 
https://specs.openstack.org/openstack/neutron-specs/specs/liberty/neutron-flavor-framework.html



Re: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

2016-11-07 Thread Michael Johnson
Hi Wanjing,

You are not seeing the network interfaces for the VIP and member
networks because they are inside a network namespace for security
reasons.  You can see these by issuing "sudo ip netns exec
amphora-haproxy ifconfig -a".
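
Spelled out as commands to run on the amphora VM (the namespace name comes from the reply above; `ip addr show` is the modern equivalent of `ifconfig -a`):

```shell
# List network namespaces on the amphora, then inspect the interfaces
# hidden inside the haproxy namespace (VIP and member networks live here).
sudo ip netns list
sudo ip netns exec amphora-haproxy ip addr show
```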

I'm not sure what version of octavia and neutron you are using, but
the issue you noted about "dns_name" was fixed here:
https://review.openstack.org/#/c/337939/

Michael


On Thu, Nov 3, 2016 at 11:29 AM, Wanjing Xu (waxu) <w...@cisco.com> wrote:
> Going through the log , I saw the following error on o-hm
>
> 2016-11-03 03:31:06.441 19560 ERROR
> octavia.controller.worker.controller_worker request_ids=request_ids)
> 2016-11-03 03:31:06.441 19560 ERROR
> octavia.controller.worker.controller_worker BadRequest: Unrecognized
> attribute(s) 'dns_name'
> 2016-11-03 03:31:06.441 19560 ERROR
> octavia.controller.worker.controller_worker Neutron server returns
> request_ids: ['req-1daed46e-ce79-471c-a0af-6d86d191eeb2']
>
> It seems that I need to upgrade my neutron client.  While I am planning
> to do that, could somebody please point me to documentation on how this vip is
> supposed to plug into the lbaas vm and how the failover works?
>
> Thanks!
> Wanjing
>
>
> From: Cisco Employee <w...@cisco.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Date: Wednesday, November 2, 2016 at 7:04 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to
> ping loadbalancer ip
>
> So I brought up octavia using devstack (stable/mitaka).   I created a
> loadbalancer and a listener (no member created yet) and started to look at how
> things are connected to each other.  I can ssh to the amphora vm and I do see a
> haproxy up with a frontend pointing to my listener.  I tried to ping (from
> the dhcp namespace) the loadbalancer ip, and the ping could not go through.  I am
> wondering how packets are supposed to reach this amphora vm.  I can see that
> the vm is launched on both networks (lb_mgmt network and my vipnet), but I
> don’t see any nic associated with my vipnet:
>
> ubuntu@amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699:~$ ifconfig -a
> eth0  Link encap:Ethernet  HWaddr fa:16:3e:b4:b2:45
>   inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
>   inet6 addr: fe80::f816:3eff:feb4:b245/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:2496 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:2626 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:307518 (307.5 KB)  TX bytes:304447 (304.4 KB)
>
> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 addr: ::1/128 Scope:Host
>   UP LOOPBACK RUNNING  MTU:65536  Metric:1
>   RX packets:212 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:18248 (18.2 KB)  TX bytes:18248 (18.2 KB)
>
> localadmin@dmz-eth2-ucs1:~/devstack$ nova list
> +--+--+++-+---+
> | ID   | Name
> | Status | Task State | Power State | Networks
> |
> +--+--+++-+---+
> | 557a3de3-a32e-419d-bdf5-41d92dd2333b |
> amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699 | ACTIVE | -  | Running
> | lb-mgmt-net=192.168.0.4; vipnet=100.100.100.4 |
> +--+--+++-+---+
>
> And it seems that the amphora created a port on the vipnet for its vrrp_ip,
> but I am not sure how it is used or how it is supposed to help packets reach the
> loadbalancer ip.
>
> It will be great if somebody can help on this, especially on network side.
>
> Thanks
> Wanjing
>
>



Re: [openstack-dev] [openstack-ansible][octavia] Spec: Deploy Octavia with OpenStack-Ansible

2016-11-07 Thread Michael Johnson
Thank you Major!

I will try to get a pass on this early in the week.

I agree with you that taking this one step at a time is probably best.
TLS offloading (requiring barbican) is a common use case, but we can
work on that as a follow up.

Michael

On Wed, Nov 2, 2016 at 6:42 AM, Major Hayden <ma...@mhtx.net> wrote:
> Hey folks,
>
> I drafted a spec yesterday for deploying Octavia with OpenStack-Ansible.  The 
> spec review[0] is pending and you can go straight to the rendered version[1] 
> if you want to take a look.
>
> We proposed this before in the Liberty release, but we ended up implementing 
> only LBaaSv2 with the agent-based load balancers.  Octavia has come a long 
> way and is definitely ready for use in Newton/Ocata.
>
> Most of the spec is fairly straightforward, but there are still two open 
> questions that may need to be answered in the implementation steps:
>
> 1) Do we generate the amphora (LB) image on the fly
>with DIB with each deployment? Or, do we pre-build
>it and download it during the deployment?
>
> It might be easier to use DIB in the development stages and then figure out a 
> cached image solution as the role becomes a little more mature.
>
> 2) Do we want to implement SSL offloading (Barbican
>is required) now or tackle that later?
>
> I'd lean towards deploying Octavia without SSL offloading first, and then add 
> in the Barbican support afterwards.  My gut says it's better to get the basic 
> functionality working well first before we begin adding extras.
>
> Your feedback is definitely welcomed! :)
>
> [0] https://review.openstack.org/392205
> [1] 
> http://docs-draft.openstack.org/05/392205/2/check/gate-openstack-ansible-specs-docs-ubuntu-xenial/8f1eec1//doc/build/html/specs/ocata/octavia.html
>
> --
> Major Hayden
>
>



Re: [openstack-dev] [neutron] [neutron-lbaas][octavia] Error in installing octavia via devstack on ubuntu stable/mitaka

2016-11-07 Thread Michael Johnson
Nope, we included these changes in Newton.

Michael


On Tue, Nov 1, 2016 at 4:11 AM, Ihar Hrachyshka <ihrac...@redhat.com> wrote:
> Michael Johnson <johnso...@gmail.com> wrote:
>
>> Hi Wanjing,
>>
>> I responded to you in IRC but you may have logged off by the time I
>> was able to respond.
>>
>> You are correct that this is an issue.  We just this week noticed it.
>> It was an oversight that is causing the amphora agent to clone the
>> master branch amphora agent code instead of the stable/mitaka version.
>>
>> There are two patches up that will fix this:
>> https://review.openstack.org/390896
>> https://review.openstack.org/391063
>>
>
> (For the latter ^) Don't we need a similar one for stable/newton?
>
> Ihar
>
>


