Re: [openstack-dev] [ServiceVM] IRC meeting minutes June 3, 2014 5:00(AM)UTC-)

2014-06-08 Thread Dmitry
I'm mostly interested in ServiceVM cooperation with the NFV working group.
The most important things are to synchronize the terminology and to discuss
the plan for the cooperation.
I would be happy to join the meeting; my timezone is UTC+2 (Jerusalem).
Thanks,
Dmitry


On Fri, Jun 6, 2014 at 7:03 AM, Isaku Yamahata isaku.yamah...@gmail.com
wrote:

 Hi Dmitry. Thanks for your interest.

 What's your time zone? In fact we have already many time zones.
 http://lists.openstack.org/pipermail/openstack-dev/2014-March/030405.html
 If desirable, we could think about rotating timezones.

 Do you have specific items to discuss?
 We could also arrange ad-hoc irc meetings for specific topics.

 thanks,

 On Thu, Jun 05, 2014 at 05:58:53PM +0300,
 Dmitry mey...@gmail.com wrote:

  Hi Isaku,
  In order to make possible to European audience to join ServiceVM
 meetings,
  could you please to move it 2-3 hours later (7-8AM UTC)?
  Thank you very much,
  Dmitry
 
 
  On Tue, Jun 3, 2014 at 10:00 AM, Isaku Yamahata 
 isaku.yamah...@gmail.com
  wrote:
 
   Here is the meeting minutes of the meeting.
  
   ServiceVM/device manager
   meeting minutes on June 3, 2014:
 https://wiki.openstack.org/wiki/Meetings/ServiceVM
  
   next meeting:
 June 10, 2014 5:00AM UTC (Tuesday)
  
   agreement:
   - include NFV conformance into the servicevm project
 => will continue discussion on nomenclature at gerrit (tacker-specs)
   - we have to define the relationship between NFV team and servicevm
 team
   - consolidate floating implementations
  
   Action Items:
   - everyone add your name/bio to contributor of incubation page
   - yamahata create tacker-specs repo in stackforge for further
 discussion
 on terminology
   - yamahata update draft to include NFV conformance
   - s3wong look into vif creation/network connection
   - everyone review incubation page
  
   Detailed logs:
  
  
 http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.html
  
  
 http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.log.html
  
   thanks,
   --
   Isaku Yamahata isaku.yamah...@gmail.com
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  

  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 --
 Isaku Yamahata isaku.yamah...@gmail.com

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-08 Thread Mark McLoughlin
Hi John,

On Fri, 2014-06-06 at 13:59 -0600, John Griffith wrote:

 On Fri, Jun 6, 2014 at 1:55 PM, John Griffith john.griff...@solidfire.com 
 wrote:

 On Fri, Jun 6, 2014 at 1:23 PM, Mark McLoughlin mar...@redhat.com 
 wrote:
 On Fri, 2014-06-06 at 13:29 -0400, Anita Kuno wrote:
  The issue I have with the word certify is that it requires someone or a
  group of someones to attest to something. The thing attested to is only
  as credible as the someone or the group of someones doing the attesting.
  We have no process, nor do I feel we want to have a process, for
  evaluating the reliability of the someones or groups of someones doing
  the attesting.
 
  I think that having testing in place, in line with other programs'
  testing of patches (third party ci), in cinder should be sufficient to
  address the underlying concern, namely reliability of opensource hooks
  to proprietary code and/or hardware. I would like the word certificate,
  and all its roots, to no longer be used in OpenStack programs with
  regard to testing. This won't happen until we get some discussion and
  agreement on this, which I would like to have.
 
 
 Thanks for bringing this up Anita. I agree that certified driver or
 similar would suggest something other than what I think we mean.
  
 Can you expand on the above comment?  In other words a bit
 more about what you mean.  I think from the perspective of a
 number of people that participate in Cinder the intent is in
 fact to say.  Maybe it would help clear some things up for
 folks that don't see why this has become a debatable issue.

Fair question. I didn't elaborate initially because I thought Anita
covered it pretty well.

 By running CI tests successfully that it is in fact a way of
 certifying that our device and driver is in fact 'certified'
 to function appropriately and provide the same level of API
 and behavioral compatibility as the default components as
 demonstrated by running CI tests on each submitted patch.

My view is that certification is an attestation that someone can take
the certified combination of a driver and whatever vendor product it is
associated with, and the combination will be fit for purpose in any of
the configurations that it supports.

To achieve anything close to that, we'd need to be explicit about what
distros, deployment tools, OpenStack configurations and vendor
configurations must be supported. And it would be fairly strange for us
to do that considering the way OpenStack just ships tarballs currently
rather than a fully deployable thing.

Also AIUI certification implies some level of warranty or guarantee,
which goes against the pretty clear language WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND in our license :)

Basically, I think there's a world of difference between what's expected
of a certification body and what a technical community like ours should
IMHO be undertaking in terms of providing information about how
functional and maintained drivers are.

(To be clear, I love that we're trying to surface information about
how well maintained and tested drivers are.)

 Personally I believe part of the contesting of the phrases and
 terms is partly due to the fact that a number of organizations
 have their own certification programs and tests.  I think
 that's great, and they in fact provide some form of
 certification that a device works in their environment and
 to their expectations.  

Also fair, and I should be careful to be clear about my Red Hat bias on
this. I am speaking here with my upstream hat on - i.e. thinking about
what's good for the project, not necessarily Red Hat - but I'm
definitely influenced about the meaning of certification by knowing a
little about Red Hat's product certification program.

 Doing this from a general OpenStack integration perspective
 doesn't seem all that different to me.  For the record, my
 initial response to this was that I didn't have too much
 preference on what it was called (verification, certification
 etc etc), however there seems to be a large number of people
 (not product vendors for what it's worth) that feel
 differently.
 


   On Fri, Jun 6, 2014 at 1:23 PM, Mark McLoughlin mar...@redhat.com 
 wrote:
 And, for 

[openstack-dev] Python API of python-.+client

2014-06-08 Thread Michael Bright
I'm interested to know what is the status of the Python API of the
python-novaclient, and the Python APIs of other OpenStack clients.

On the github page https://github.com/openstack/python-novaclient/ it is
written:
*There's also a complete Python API, but it has not yet been
documented.*

Having written some bash scripts to automate some tasks this week, I thought
I should really have done this in Python, but seeing this comment discourages
me - and, more importantly, it raises many questions.
There are also few examples available on the web for these APIs, though I
have used them in the past for some very small scripts.
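
For concreteness, the kind of small script I mean looks roughly like this
(credentials via the usual OS_* environment variables; the exact Client()
signature differs between novaclient releases, so treat it as a sketch rather
than gospel):

    import os
    from novaclient import client as nova_client

    nova = nova_client.Client('2',
                              os.environ['OS_USERNAME'],
                              os.environ['OS_PASSWORD'],
                              os.environ['OS_TENANT_NAME'],
                              auth_url=os.environ['OS_AUTH_URL'])

    # the Python API maps closely onto what the CLI exposes
    for server in nova.servers.list():
        print('%s %s' % (server.name, server.status))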

I ask these questions about python-novaclient, but am also interested in
how they apply to other OpenStack clients.

  * Are people actually using the Python API?
    If so, is it as stable as, or more or less stable than, the command-line
    client?
    If not, why not - are you using other APIs, or just bash scripting
    around the command-line client?

  * What are the plans, if any, to improve the situation?
    Is it just a question of someone stepping up and writing documentation?
    Is there a clear idea of what needs to be done?
    Are there bugs open against the documentation for this API? (Sorry, no
    time to search right now ...)

I'd certainly like to contribute to the documentation if this is considered
worthwhile ... I'm just surprised that this API seems to be
unused.

Interested to hear your thoughts/experiences.
Thanks,
Mike.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Python API of python-.+client

2014-06-08 Thread Jeremy Stanley
On 2014-06-08 10:35:26 +0200 (+0200), Michael Bright wrote:
 On the github page it is written:
 There's also a complete Python API, but it has not yet been documented.
[...]

I think the README in python-novaclient could probably use a little
updating. There is no official narrative documentation I'm aware
of comprehensively covering it, but there is a complete reference
manual at http://docs.openstack.org/developer/python-novaclient/ if
that's what you're looking for.

 I ask these questions about python-novaclient, but am also interested in how
 they apply to other OpenStack clients.

http://docs.openstack.org/developer/python-swiftclient/
http://docs.openstack.org/developer/python-cinderclient/
http://docs.openstack.org/developer/python-glanceclient/
http://docs.openstack.org/developer/python-neutronclient/

et cetera. I agree that the current discoverability of these is not
great and could use improvement, but there is currently work
underway to make http://developer.openstack.org/ more functional in
this regard. There's also another ongoing effort to hopefully help
unify the various Python APIs:

http://docs.openstack.org/developer/python-openstackclient/

 * Are people actually using the Python API? If so, is it as
 stable, or more or less stable than the command-line client?
[...]

Sure! For example the OpenStack project developer infrastructure
automation is a consumer of them:

URL: http://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/provider_manager.py#n21

 I'd certainly like to contribute to the documentation if this is
 considered worthwhile ... I'm just surprised that this API seems
 to be unused.
[...]

I'm sure the novaclient and documentation teams would love to have
help improving the status quo. You may also want to post a more
specific offer to help on the openstack-d...@lists.openstack.org
mailing list:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs

-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Deva,

I haven't actually touched Ironic db migrations tests code yet, but
your feedback is very valuable for oslo.db maintainers, thank you!

So currently, there are two ways to run migrations tests:
1. Opportunistically (using openstack_citest user credentials; this is
how we test migrations in the gates in Nova/Glance/Cinder/etc). I'm
surprised we don't provide this out of the box in the common db code.
2. By providing database credentials in test_migrations.conf
(Nova/Glance/Cinder/etc have test_migrations.conf, though I haven't
ever tried to put mysql/postgresql credentials there and run unit
tests).

The latter came to the common db code directly from Nova and I'm not sure
if anyone is using the incubator version of it in the consuming
projects. Actually, I'd really like us to drop this feature and stick
to the opportunistic migration tests (fyi, there is a patch on
review to oslo.db [1]) to ensure there is only one way to run the
migration tests, and that it is the same way we run them in the gates.

[1] uses opportunistic DB test cases provided by oslo.db to prevent
race conditions: a db is created on demand per test (which is
obviously not fast, but safe and easy). And it's perfectly normal to
use a separate db per migration test case, as this is the kind of
test that needs total control of the database, which cannot be
provided even by using high transaction isolation levels (so
unfortunately we can't use the solution proposed by Mike here).

Migration tests using test_migrations.conf, on the other hand, leave
it up to you to isolate separate test cases that use the same
database. You could put file locks on each conflicting test case to
prevent race conditions, but this is not really handy, of course.
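
Just to illustrate the file-lock idea, here is a rough sketch using nothing
but the standard library (the decorator name and lock path are made up for
this example; this is not code we ship anywhere):

    import fcntl
    import functools

    def with_db_lock(lock_path='/tmp/test_migrations.lock'):
        def decorator(test_method):
            @functools.wraps(test_method)
            def wrapper(self, *args, **kwargs):
                with open(lock_path, 'w') as lock_file:
                    # blocks until no other test process holds the lock
                    fcntl.flock(lock_file, fcntl.LOCK_EX)
                    try:
                        return test_method(self, *args, **kwargs)
                    finally:
                        fcntl.flock(lock_file, fcntl.LOCK_UN)
            return wrapper
        return decorator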

Overall, I think this is a good example of a situation where we put
code into the incubator before it was really ready to be reused by other
projects. We should at least have added docs on how to use those
migration tests properly. This is something we should become better
at as a team.

Ok, so at least we know about the problem and [1] should make it
easier for everyone in the consuming projects to run their migrations
tests.

Thanks,
Roman

[1] https://review.openstack.org/#/c/93424/

On Sat, Jun 7, 2014 at 3:12 AM, Devananda van der Veen
devananda@gmail.com wrote:
 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


 So, Ironic devs -- if you see a DB migration proposed, pay extra attention
 to it. We aren't running migration tests in our check or gate queues right
 now, and we shouldn't enable them until this is fixed.

 Regards,
 Devananda

 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Mike,

 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.

This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach of executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
was tests doing ROLLBACK on purpose. But you've updated the recipe
since then, and this can probably be solved by using savepoints. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia, probably: [2] needs a spec, with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be the way to go for running all db-related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite, AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:

 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen devananda@gmail.com
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


 Hello -

 Just an introduction, I’m Mike Bayer, the creator of SQLAlchemy and Alembic
 migrations. I’ve just joined on as a full time Openstack contributor,
 and trying to help improve processes such as these is my primary
 responsibility.

 I’ve had several conversations already about how migrations are run within
 test suites in various openstack projects.   I’m kind of surprised by this
 approach of dropping and recreating the whole database for individual tests.
 Running tests in parallel is obviously made very difficult by this style,
 but even beyond that, a lot of databases don’t respond well to lots of
 dropping/rebuilding of tables and/or databases in any case; while SQLite and
 MySQL are probably the most forgiving of this, 

Re: [openstack-dev] Python API of python-.+client

2014-06-08 Thread Michael Bright
Thanks a lot Jeremy and Brian, a lot of useful information there.

I'll have a good look at all this to see where I can help.

Regards,
Mike.



On 8 June 2014 17:07, Brian Curtin br...@python.org wrote:

 On Sun, Jun 8, 2014 at 3:35 AM, Michael Bright mjbrigh...@gmail.com
 wrote:
 
  I'm interested to know what is the status of the Python API of the
  python-novaclient, and the Python APIs of other OpenStack clients.
 
  On the github page it is written:
  There's also a complete Python API, but it has not yet been
  documented.
 
  Having written some bash scripts to automate some tasks this week I
 thought
  I should really have done this in Python, but then
  when I see this comment this discourages me - but more importantly raises
  many questions.
  There are also few examples available on the web for these APIs - though
 I
  have used them in the past for some v. small scripts.
 
  I ask these questions about python-novaclient, but am also interested in
 how
  they apply to other OpenStack clients.
 
* Are people actually using the Python API ?
  If so, is it as stable, or more or less stable than the command-line
  client ?
  If not why not - are you using other APIs, or just bash scripting
 around
  the command-line client ?
 
* What are the plans, if any, to improve the situation ?
  Is it just a question of someone stepping up and writing
 documentation ?
  Is there a clear idea of what needs to be done ?
  Are there bugs open against the documentation for this API (Sorry
 not to
  spend the time to search right now ...)

 There's currently a new effort going on right now to reimagine all of
 those separate per-service tools into one complete SDK (libraries,
 CLIs, docs, examples, etc in one place), but it's early in the
 process. https://github.com/stackforge/python-openstacksdk and
 https://wiki.openstack.org/wiki/PythonOpenStackSDK have some details,
 and we're actively meeting on Tuesdays at 1900, and hang out in the
 #openstack-sdks room.

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Mike Bayer

On Jun 8, 2014, at 11:46 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 
 Overall, the approach with executing a test within a transaction and
 then emitting ROLLBACK worked quite well. The only problem I ran into
 were tests doing ROLLBACK on purpose. But you've updated the recipe
 since then and this can probably be solved by using of save points.

yup, I went and found the gist, that is here:

https://gist.github.com/zzzeek/8443477
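
For anyone skimming the thread, here is the core of the pattern under
discussion, condensed (the gist is the authoritative version; the engine URL
and class name below are placeholders):

    from sqlalchemy import create_engine, event
    from sqlalchemy.orm import Session

    engine = create_engine('postgresql://user:pass@localhost/testdb')

    class TransactionalTestCase(object):
        def setUp(self):
            self.connection = engine.connect()
            self.trans = self.connection.begin()        # outer transaction
            self.session = Session(bind=self.connection)
            self.session.begin_nested()                 # SAVEPOINT

            @event.listens_for(self.session, 'after_transaction_end')
            def restart_savepoint(session, transaction):
                # if a test rolls back the SAVEPOINT, open a new one
                if transaction.nested and not transaction._parent.nested:
                    session.begin_nested()

        def tearDown(self):
            self.session.close()
            self.trans.rollback()    # discard everything the test did
            self.connection.close()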



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-08 Thread Mike Bayer

On Jun 7, 2014, at 4:38 PM, Eugene Nikanorov enikano...@mirantis.com wrote:

 Hi folks,
 
 There was a small discussion about the better way of doing sql operations for
 vni synchronization with the config.
 The initial proposal was to handle those in chunks. Carl also suggested to
 issue a single sql query.
 I did some testing with mysql and postgres.
 I tested the following scenario: the vxlan range is changed from 5:15
 to 0:10 and vice versa.
 That involves adding and deleting 5 vni in each test.
 
 Here are the numbers:
 50k vnis to add/delete   Pg adding vnis   Pg deleting vnis   Pg total   Mysql adding vnis   Mysql deleting vnis   Mysql total
 non-chunked sql          23               22                 45         14                  20                    34
 chunked in 100           20               17                 37         14                  17                    31
 
 I did about 5 tries to get each number, to minimize random floating factors
 (due to swaps, disc or cpu activity or other factors).
 It might be surprising that issuing multiple sql statements instead of one
 big one is a little bit more efficient, so I would appreciate it if someone
 could reproduce those numbers.
 Also I'd like to note that the part of the code that iterates over vnis
 fetched from the db is taking 10 seconds on both mysql and postgres and is
 included in the deleting-vnis numbers.
 In other words, the difference between multiple DELETE sql statements and a
 single one is even bigger (in percent) than these numbers show.
 
 The code which I used to test is here: http://paste.openstack.org/show/83298/
 Right now the chunked version is commented out, so to switch between versions 
 some lines should be commented and some - uncommented.

I’ve taken a look at this, though I’m not at the point where I have things set 
up to run things like this within full context, and I don’t know that I have 
any definitive statements to make, but I do have some suggestions:

1. I do tend to chunk things a lot, selects, deletes, inserts, though the chunk 
size I work with is typically more like 1000, rather than 100.   When chunking, 
we’re looking to select a size that doesn’t tend to overload the things that 
are receiving the data (query buffers, structures internal to both SQLAlchemy 
as well as the DBAPI and the relational database), but at the same time doesn’t 
lead to too much repetition on the Python side (where of course there’s a lot 
of slowness).

2. Specifically regarding “WHERE x IN (…..)”, I always chunk those.  When we 
use IN with a list of values, we’re building an actual SQL string that becomes 
enormous.  This puts strain on the database’s query engine that is not 
optimized for SQL strings that are hundreds of thousands of characters long, 
and on some backends this size is limited; on Oracle, there’s a limit of 1000 
items.   So I’d always chunk this kind of thing. (A rough sketch of what I mean 
follows the list below.)

3. I’m not sure of the broader context of this code, but in fact placing a 
literal list of items in the IN in this case seems unnecessary; the 
“vmis_to_remove” list itself was just SELECTed two lines above.   There’s some 
in-Python filtering following it which does not seem necessary; the 
“alloc.vxlan_vni not in vxlan_vnis” phrase could just as well be a SQL “NOT IN” 
expression.  Not sure if determination of the “.allocated” flag can be done in 
SQL, if that’s a plain column, then certainly.Again not sure if this is 
just an artifact of how the test is done here, but if the goal is to optimize 
this code for speed, doing a DELETE…WHERE .. IN (SELECT ..) is probably better. 
  I see that the SELECT is using a lockmode, but it would seem that if just the 
rows we care to DELETE are inlined within the DELETE itself this wouldn’t be 
needed either.

It’s likely that everything in #3 is pretty obvious already and there’s reasons 
it’s the way it is, but I’m just learning all of these codebases so feel free 
to point out more of the background for me.   

4. The synchronize_session=“fetch” is certainly a huge part of the time spent 
here, and it seems unclear why this synchronize is necessary.  When I use 
query.delete() I never use “fetch”; I either have synchronization turned off, 
as the operation is not dealing with any set of objects already in play, or I 
use “evaluate” which here is not possible with the IN (though there is a 
SQLAlchemy ticket for many years to implement “evaluate” using IN (values) 
that is pretty easy to implement, but if the query became an IN (SELECT …)” 
that again would not be feasible).

5. I don’t have a great theory on why chunking does better here on the INSERT.  
 My vague notion here is that as with the DELETE, the systems in play do better 
when they aren’t tasked with building up very large internal buffers for 
operations, but that’s not something I have the background to prove.  
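
To make the chunking in #2 concrete, here is roughly what I mean; this is only
an illustrative sketch, not the actual Neutron code (the model and column names
are whatever the caller passes in):

    def delete_in_chunks(session, model, column, values, chunk_size=1000):
        # issue several bounded "DELETE .. WHERE column IN (...)" statements
        # instead of one statement with an enormous IN list
        for i in range(0, len(values), chunk_size):
            chunk = values[i:i + chunk_size]
            session.query(model).\
                filter(column.in_(chunk)).\
                delete(synchronize_session=False)

    # e.g., assuming a VxlanAllocation model with a vxlan_vni column:
    # delete_in_chunks(session, VxlanAllocation,
    #                  VxlanAllocation.vxlan_vni, vnis_to_remove)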

These are all just some impressions and as I’m totally new to this code base I 
may be way off, so please feel free to help me get up to speed!

- mike


___

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-08 Thread Brandon Logan
I think that would defeat a big purpose for barbican; a user only has to
store their secret in one central location for reuse with many services.

Thanks,
Brandon

On Sun, 2014-06-08 at 09:05 +0400, Eugene Nikanorov wrote:
   If a user makes a change to a secret
 Can we just disable that by making LBaaS a separate user so it would
 store secrets under LBaaS 'fake' tenant id?
 
 
 Eugene.
 
 
 On Sun, Jun 8, 2014 at 7:29 AM, Jain, Vivek vivekj...@ebay.com
 wrote:
 +1 for #2.
 
 In addition, I think it would be nice if barbican maintains
 versioned data
 on updates. Which means consumer of barbican APIs can request
 for data
 from older version if needed. This can address concerns
 expressed by
 German. For example if certificates were updated on barbican
 but somehow
 update is not compatible with load balancer device, then lbaas
 API user
 gets an option to fall back to older working certificate. That
 will avoid
 downtime of lbaas managed applications.
 
 Thanks,
 Vivek
 
 On 6/6/14, 3:52 PM, Eichberger, German
 german.eichber...@hp.com wrote:
 
 Jorge + John,
 
 I am most concerned with a user changing his secret in
 barbican and then
 the LB trying to update and causing downtime. Some users like
 to control
 when the downtime occurs.
 
 For #1 it was suggested that once the event is delivered it
 would be up
 to a user to enable an auto-update flag.
 
 In the case of #2 I am a bit worried about error cases: e.g.
 uploading
 the certificates succeeds but registering the loadbalancer(s)
 fails. So
 using the barbican system for those warnings might not as
 fool proof as
 we are hoping.
 
 One thing I like about #2 over #1 is that it pushes a lot of
 the
 information to Barbican. I think a user would expect when he
 uploads a
 new certificate to Barbican that the system warns him right
 away about
 load balancers using the old cert. With #1 he might get an
 e-mails from
 LBaaS telling him things changed (and we helpfully updated
 all affected
 load balancers) -- which isn't as immediate as #2.
 
 If we implement an auto-update flag for #1 we can have
 both. User's who
 like #2 juts hit the flag. Then the discussion changes to
 what we should
 implement first and I agree with Jorge + John that this
 should likely be
 #2.
 
 German
 
 -Original Message-
 From: Jorge Miramontes
 [mailto:jorge.miramon...@rackspace.com]
 Sent: Friday, June 06, 2014 3:05 PM
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican
 Neutron LBaaS
 Integration Ideas
 
 Hey John,
 
 Correct, I was envisioning that the Barbican request would
 not be
 affected, but rather, the GUI operator or API user could use
 the
 registration information to do so should they want to do so.
 
 Cheers,
 --Jorge
 
 
 
 
 On 6/6/14 4:53 PM, John Wood john.w...@rackspace.com
 wrote:
 
 Hello Jorge,
 
 Just noting that for option #2, it seems to me that the
 registration
 feature in Barbican would not be required for the first
 version of this
 integration effort, but we should create a blueprint for it
 nonetheless.
 
 As for your question about services not
 registering/unregistering, I
 don't see an issue as long as the presence or absence of
 registered
 services on a Container/Secret does not **block** actions
 from
 happening, but rather is information that can be used to
 warn clients
 through their processes. For example, Barbican would still
 delete a
 Container/Secret even if it had registered services.
 
 Does that all make sense though?
 
 Thanks,
 John
 
 
 From: Youcef Laribi [youcef.lar...@citrix.com]
 Sent: Friday, June 06, 2014 2:47 PM
 To: OpenStack Development Mailing List (not for usage
 questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican
 Neutron LBaaS
 Integration Ideas
 
 +1 for option 2.
 
 In addition as an additional safeguard, the LBaaS service
 could check
 with Barbican 

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-08 Thread Clint Byrum
Excerpts from Eichberger, German's message of 2014-06-06 15:52:54 -0700:
 Jorge + John,
 
 I am most concerned with a user changing his secret in barbican and then the 
 LB trying to update and causing downtime. Some users like to control when the 
 downtime occurs.
 

Couldn't you allow a user to have multiple credentials, the way basically
every key-based user access system works (for an example, see SSH)? Users
changing their credentials would create new ones, reference them in the
appropriate consuming service, and dereference old ones when they are
believed to be out of service.

I see both specified options as overly complicated attempts to work
around what would be solved gracefully with a many-to-one relationship
of keys to users.

 For #1 it was suggested that once the event is delivered it would be up to a 
 user to enable an auto-update flag.
 
 In the case of #2 I am a bit worried about error cases: e.g. uploading the 
 certificates succeeds but registering the loadbalancer(s) fails. So using the 
 barbican system for those warnings might not as fool proof as we are hoping. 
 
 One thing I like about #2 over #1 is that it pushes a lot of the information 
 to Barbican. I think a user would expect when he uploads a new certificate to 
 Barbican that the system warns him right away about load balancers using the 
 old cert. With #1 he might get an e-mails from LBaaS telling him things 
 changed (and we helpfully updated all affected load balancers) -- which isn't 
 as immediate as #2. 
 
 If we implement an auto-update flag for #1 we can have both. User's who 
 like #2 juts hit the flag. Then the discussion changes to what we should 
 implement first and I agree with Jorge + John that this should likely be #2.

IMO you're doing way too much and tending toward tight coupling which
will make the system brittle.

If you want to give the user orchestration, there is Heat. A template will
manage the sort of things that you want, such as automatic replacement
and dereferencing/deleting of older credentials. But not if your service
doesn't support having n+1 active credentials at one time.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-08 Thread Jaromir Coufal

Hi,

it looks like there is no more activity on the survey for mid-cycle 
dates, so I went ahead and evaluated it.


I created a table view into the etherpad [0] and results are following:
* option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
* option2 (Jul 21-25) : 27 attendees
* option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
* option4 (Aug 11-15) : 13 attendees

I think that we can remove options 3 and 4 from consideration, 
because there are a lot of people who can't make them. So we have option1 and 
option2 left. Since Robert and Devananda (PTLs of the projects) can't 
make option1, which also conflicts with the Nova/Ironic meetup, I think it 
is pretty straightforward.


Based on the survey, the winning date for the mid-cycle meetup is 
option2: July 21st - 25th.


Does anybody have a very strong reason why we shouldn't fix the date as 
option2 and proceed with the organization of the meetup?


Thanks for all the interest
-- Jarda

[0] https://etherpad.openstack.org/p/juno-midcycle-meetup


On 2014/28/05 13:05, Jaromir Coufal wrote:

Hi to all,

after the previous TripleO & Ironic mid-cycle meetup, which I believe was
beneficial for all, I would like to suggest that we meet again in the
middle of the Juno cycle to discuss current progress, blockers, next steps
and of course get some beer all together :)

Last time, TripleO and Ironic merged their meetings together and I think
it was a great idea. This time I would like to also invite the Heat team if
they want to join. Our cooperation is increasing and I think it would be
great if we could discuss all issues together.

Red Hat offered to host this event, so I am very happy to invite you all
and I would like to ask who would come if there was a mid-cycle meetup
at the following dates and place:

* July 28 - Aug 1
* Red Hat office, Raleigh, North Carolina

If you are intending to join, please, fill yourselves into this etherpad:
https://etherpad.openstack.org/p/juno-midcycle-meetup

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] two confused part about Ironic

2014-06-08 Thread Jander lu
Hi Devananda

I have 16 compute nodes. Per your suggestion (you should use host aggregates
to differentiate the nova-compute services configured to use different
hypervisor drivers (eg, nova.virt.libvirt vs nova.virt.ironic)):

(1) Can I set 4 of them up with nova.virt.ironic (for bare metal provisioning)
and leave 12 of them with nova.virt.libvirt (for VM provisioning), and will
they work well together for both VM provisioning and Ironic provisioning? Of
course I would use host aggregates to make the 4 nodes one aggregate and the
other 12 nodes another aggregate. A rough sketch of what I have in mind is
below.
(2) Should I replace the nova scheduler, or can the default nova scheduler
(FilterScheduler) support this?
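
For reference, the aggregate setup I have in mind looks roughly like this,
using the python-novaclient API (credentials, host names and metadata keys
are placeholders, and the flavor/filter wiring depends on which scheduler
filters are enabled in nova.conf):

    import os
    from novaclient import client as nova_client

    nova = nova_client.Client('2',
                              os.environ['OS_USERNAME'],
                              os.environ['OS_PASSWORD'],
                              os.environ['OS_TENANT_NAME'],
                              auth_url=os.environ['OS_AUTH_URL'])

    # one aggregate for the 4 ironic-driven hosts, one for the 12 libvirt hosts
    bm_agg = nova.aggregates.create('baremetal-hosts', 'baremetal-az')
    vm_agg = nova.aggregates.create('virtual-hosts', 'virtual-az')

    for host in ['compute-01', 'compute-02', 'compute-03', 'compute-04']:
        nova.aggregates.add_host(bm_agg, host)

    # metadata that flavors can match, e.g. via AggregateInstanceExtraSpecsFilter
    nova.aggregates.set_metadata(bm_agg, {'baremetal': 'true'})
    nova.aggregates.set_metadata(vm_agg, {'baremetal': 'false'})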



2014-06-06 15:45 GMT+08:00 Jander lu lhcxx0...@gmail.com:

 Hi Devananda

 I have 16 compute nodes, as your suggestion (you should use host
 aggregates to differentiate the nova-compute services configured to use
 different hypervisor drivers (eg, nova.virt.libvirt vs nova.virt.ironic)
 .

 (1)I  can set 4 of them with nova.virt.ironic(for bare metal provision)
 and left 12 of them with nova.virt.libvirt (for VM provision), they can
 work well together both for VM provision and Ironic provision ?  of course
 I should use host aggregates to make 4 nodes as one aggregate and left 12
 nodes as another aggregate.
 (2 ) should I replace the nova sheduler? the default nova scheduler(Filter
 Scheduler) can support this?


 2014-06-06 1:27 GMT+08:00 Devananda van der Veen devananda@gmail.com
 :

 There is documentation available here:
   http://docs.openstack.org/developer/ironic/deploy/install-guide.html

 On Thu, Jun 5, 2014 at 1:25 AM, Jander lu lhcxx0...@gmail.com wrote:
  Hi, Devvananda
 
  I searched a lot about the installation of Ironic, but there is little
  metarial about this,  there is only devstack with
  ironic(
 http://docs.openstack.org/developer/ironic/dev/dev-quickstart.html)
 
  is there any docs about how to deploy Ironic on production physical node
  enviroment?
 
  thx
 
 
 
  2014-05-30 1:49 GMT+08:00 Devananda van der Veen 
 devananda@gmail.com:
 
  On Wed, May 28, 2014 at 8:14 PM, Jander lu lhcxx0...@gmail.com
 wrote:
 
  Hi, guys, I have two confused part in Ironic.
 
 
 
  (1) if I use nova boot api to launch an physical instance, how does
 nova
  boot command differentiate whether VM or physical node provision?
 From this
  article, nova bare metal use PlacementFilter instead of
 FilterScheduler.so
  does Ironic use the same method?
  (
 http://www.mirantis.com/blog/baremetal-provisioning-multi-tenancy-placement-control-isolation/
 )
 
 
  That blog post is now more than three releases old. I would strongly
  encourage you to use Ironic, instead of nova-baremetal, today. To my
  knowledge, that PlacementFilter was not made publicly available. There
 are
  filters available for the FilterScheduler that work with Ironic.
 
  As I understand it, you should use host aggregates to differentiate the
  nova-compute services configured to use different hypervisor drivers
 (eg,
  nova.virt.libvirt vs nova.virt.ironic).
 
 
 
  (2)does Ironic only support Flat network? If not, how does Ironic
  implement tenant isolation in virtual network? say,if one tenant has
 two
  vritual network namespace,how does the created bare metal node
 instance send
  the dhcp request to the right namespace?
 
 
  Ironic does not yet perform tenant isolation when using the PXE driver,
  and should not be used in an untrusted multitenant environment today.
 There
  are other issues with untrusted tenants as well (such as firmware
 exploits)
  that make it generally unsuitable to untrusted multitenancy (though
  specialized hardware platforms may mitigate this).
 
  There have been discussions with Neutron, and work is being started to
  perform physical network isolation, but this is still some ways off.
 
  Regards,
  Devananda
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Is ironic support EXSI when boot a bare metal

2014-06-08 Thread LeslieWang
--
Hi Devananda,
Thanks for your reply. Your link shows how to create a VM through vSphere. What 
we are trying to do is deploy vSphere itself onto a bare metal server, so that 
automation can cover everything from installation and deployment to 
configuration and VM creation.
Hi Chris,
We do use diskimage-builder to create an ubuntu image, kernel and initramfs, and 
we deploy ubuntu through the Ironic API successfully. However, it seems like 
diskimage-builder doesn't support vmware, so we don't know how to extract the 
vmware kernel and initramfs from a vmware image such as 
http://partnerweb.vmware.com/programs/vmdkimage/debian-2.6.32-i686.vmdk. So we 
wonder whether anyone has done this before.
Best Regards,
Leslie
--
Hi Chao,
The ironic ssh driver does support vmware. See 
https://github.com/openstack/ironic/blob/master/ironic/drivers/modules/ssh.py#L69-L89.
 Have you seen the Triple-O tools, mainly Disk Image Builder 
(https://github.com/openstack/diskimage-builder). This is how I build images I 
use for testing. I have not tested the vmware parts of ironic as I do not have 
a vmware server to test with, others have tested it. 
Hope this helps.
Chris Krelle -- NobodyCam

2014-06-06 1:31 GMT+08:00 Devananda van der Veen devananda@gmail.com:
ChaoYan,

Are you asking about using vmware as a test platform for developing
Ironic, or as a platform on which to run a production workload managed
by Ironic? I do not understand your question -- why would you use
Ironic to manage a VMWare cluster, when there is a separate Nova
driver specifically designed for managing vmware? While I am not
familiar with it, I believe more information may be found here:
  https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide

Best,
Devananda

On Thu, Jun 5, 2014 at 4:39 AM, 严超 yanchao...@gmail.com wrote:
 Hi, All:
 Is ironic support EXSI when boot a bare metal ? If we can, how to
 make vmware EXSI ami bare metal image ?

 Best Regards!
 Chao Yan
 --
 My twitter:Andy Yan @yanchao727
 My Weibo:http://weibo.com/herewearenow
 --

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][DB] DB migration weekly meeting today (Monday) at 1300 UTC

2014-06-08 Thread Henry Gessau
The Neutron DB migration refactor weekly meeting[1] has moved to 1300 UTC on
Mondays. (I should have sent out this reminder earlier, sorry.)

Please review the spec[2] to see the changes in the design since last week.

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDB
[2] https://review.openstack.org/95738


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev