Re: [openstack-dev] Building on Debian: Havana unit tests at build time report

2013-10-17 Thread Roman Podolyaka
Hi all,

Being a bit familiar with both SQLAlchemy and sqlalchemy-migrate, I decided
to look into the issue with running Nova's migration tests against SQLAlchemy
0.8.x. The long story is here:
https://bugs.launchpad.net/sqlalchemy-migrate/+bug/1241038

TL;DR
1. It's really an issue with sqlalchemy-migrate and has nothing to do with
Nova.
2. sqlalchemy-migrate DOES seem to support SQLAlchemy 0.8.x, but I believe
we are going to face more and more similar issues.
3. It's a downgrade, so it should be a minor issue for packaging the new
OpenStack release in Debian.
4. Everyone's life would be much easier if we dropped migration support
for SQLite. Alembic deliberately doesn't support ALTER for SQLite at all,
and I would really like us to switch to Alembic in the following
releases.

I'll try to fix this in sqlalchemy-migrate and maybe we could release the
fix later.

Thanks,
Roman


On Thu, Oct 17, 2013 at 6:31 PM, Thomas Goirand z...@debian.org wrote:

 On 10/17/2013 09:11 PM, Monty Taylor wrote:
  I understand what you are saying and I also understand your frustration.
  However, OpenStack does not, as of yet, support SQLAlchemy 0.8, and as
  you can see, the requirements file does, in fact, communicate reality:
  we depend on <0.8.

 It does, except for this little issue when running in Sid with
 SQLite 3.

  I support the bug being fixed and the requirement being raised, but it
  has to happen in that order.

 Sure!

  So yes, we need a fix for the above problem, and Nova needs to support
  SQLAlchemy 0.8.2 which is what we have in Sid [1]. If I remember correctly,
  there's also a problem on RPM-based systems. BTW, it looks like it is
  an isolated problem, and it seems to be the only one left in this
  release of OpenStack. It is also new (Nova Havana B3 didn't have the
  issue). So I really think it shouldn't be hard to fix the problem.
 
  Also, note that the issue isn't only in Debian, it's also a problem for
  Ubuntu 2013.10, which also has SQLAlchemy 0.8.2. [2]
 
  Fascinating. What have they done to make that work?

 I'm not sure what you are talking about, since I don't agree that
 OpenStack doesn't work with SQLAlchemy 0.8.2 (apart from this one bug).

  And why has none of
  that work made it into OpenStack so that we could raise the SQLAlchemy
  requirement?

 If you didn't know, I already have Grizzly (currently in Sid) working
 with SQLAlchemy 0.8.2... (I just backported from Havana to Grizzly a few
 patches from upstream, namely for Cinder and Heat IIRC.) These issues were
 reported by me during the summer, and fixed in July for all projects,
 and just before Havana B3 for Heat.

  I really hope that for the next release cycle, the above will be taken
  into consideration, and that someone will come up with a fix quickly for
  this release. Maybe *now* is the time to switch the gate to SQLAlchemy
  0.8.2 for the Icehouse branch?
 
  Can't change the req until it works - but yes, I agree with you, we
  clearly need to upgrade.

 Let's have that one fixed, and let's switch...

  Thanks for your help and attention, Thomas! I appreciate all the work!!!

 My pleasure! :)

 Thomas



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Requirements syncing job is live

2013-10-02 Thread Roman Podolyaka
Hello ZhiQiang,

I'm not sure what HEADs you mean: oslo-incubator doesn't contain git
submodules, but rather regular Python packages.

On the other hand, oslo.version/oslo.messaging/oslo.* are separate
libraries with their own releases, so syncing the global requirements
will effectively make projects use newer versions of those libs.

Thanks,
Roman


On Wed, Oct 2, 2013 at 5:02 AM, ZhiQiang Fan aji.zq...@gmail.com wrote:

 great job! thanks

 (how about auto-syncing from oslo too?
 - projects.txt: projects that want to be automatically synced from oslo
 - heads.txt: the HEAD for each module in oslo

 whenever a module maintainer thinks the current module is strong enough to
 publish, he/she can edit that module's line in heads.txt, and Jenkins will
 propose a sync patch for the projects listed in projects.txt

 this behavior could be dangerous, since it may pass the gate tests when
 merged but cause internal bugs which are not well covered by tests)


 On Wed, Oct 2, 2013 at 1:27 AM, Monty Taylor mord...@inaugust.com wrote:

 Hey all!

 The job to automatically propose syncs from the openstack/requirements
 repo went live today - as I'm sure you all noticed, since pretty much
 everyone got a patch of at least some size.

 The job works the same way as the translations job - it will propose a
 patch any time the global repo changes - but if there is already an
 outstanding change that has not been merged, it will simply amend that
 change. So there should only ever be one change per branch per project
 in the topic openstack/requirements submitted by the jenkins user.

 If a change comes in and you say to yourself "ZOMG, that version would
 break us" - then you should definitely go and propose an update to the
 global list itself, which is in the global-requirements.txt file in the
 openstack/requirements repo.

 The design goal, as discussed at the last two summits, is that we should
 converge on alignment by the release at the very least. With this and
 the changes that exist now in the gate to block non-aligned
 requirements, once we get aligned, we probably shouldn't drift too far
 from each other moving forward.

 Additionally, the list of projects to receive updates is managed in a
 file, projects.txt, in the openstack/requirements repo. If you are
 running a project and would like to receive syncing patches, feel free
 to add yourself to the list.

 Enjoy!
 Monty





 --
 blog: zqfan.github.com
 git: github.com/zqfan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] configapplier licensing

2013-09-20 Thread Roman Podolyaka
Hi Thomas,

I believe all OpenStack projects (including diskimage-builder [1] and
os-apply-config [2]) are distributed under the Apache 2.0 license.

Thanks,
Roman

[1] https://github.com/openstack/diskimage-builder/blob/master/LICENSE
[2] https://github.com/openstack/os-apply-config/blob/master/LICENSE


On Fri, Sep 20, 2013 at 8:02 AM, Thomas Goirand z...@debian.org wrote:

 Hi,

 While trying to package diskimage-builder for Debian, I saw that in some
 files, it's written "this file is released under the same license as
 configapplier". However, I haven't been able to find the license of
 configapplier anywhere.

 So, under which license is configapplier released? I need this
 information to populate the debian/copyright file before uploading to
 Sid (to pass the NEW queue).

 Cheers,

 Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Backwards incompatible migration changes - Discussion

2013-09-12 Thread Roman Podolyaka
I can't agree more with Robert.

Even if it were possible to downgrade all migrations without data loss,
backups would still be required before a DB schema upgrade/downgrade.

E.g. MySQL doesn't support transactional DDL. So if a migration script
can't be executed successfully for whatever reason (let's say we haven't
tested it well enough on real data and it turns out it has a few bugs),
you will end up in a situation where the migration is partially applied...
And migrations can fail before backup tables are created, while they are
being created, or after they are created.
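
A minimal illustration of this failure mode (the table and column names here
are hypothetical, not taken from an actual Nova migration):

    import sqlalchemy

    engine = sqlalchemy.create_engine("mysql://user:password@localhost/nova")
    conn = engine.connect()
    trans = conn.begin()
    try:
        # MySQL implicitly commits every DDL statement, so this ALTER is
        # permanent as soon as it executes:
        conn.execute("ALTER TABLE instances ADD COLUMN node VARCHAR(255)")
        # If a later statement in the same migration fails...
        conn.execute("ALTER TABLE instance_types ADD COLUMN is_public BOOLEAN")
        trans.commit()
    except Exception:
        # ...this rollback cannot undo the first ALTER: the schema is now
        # partially migrated, and only a backup can restore the old state.
        trans.rollback()
        raise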

Thanks,
Roman


On Thu, Sep 12, 2013 at 8:30 AM, Robert Collins
robe...@robertcollins.net wrote:

 I think having backup tables adds substantial systematic complexity
 for a small use case.

 Perhaps a better answer is to document a 'take a backup here' step as part
 of the upgrade documentation and let sysadmins make a risk assessment.
 We can note that downgrades are not possible.

 Even in a public cloud doing trunk deploys, taking a backup shouldn't
 be a big deal: *those* situations are where you expect backups to be
 well understood; and small clouds don't have data scale issues to
 worry about.

 -Rob

 On 12 September 2013 17:09, Joshua Hesketh joshua.hesk...@rackspace.com
 wrote:
  On 9/4/13 6:47 AM, Michael Still wrote:
 
  On Wed, Sep 4, 2013 at 1:54 AM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
 
  +1 I think we should be reconstructing data where we can, but keeping
  track of
  deleted data in a backup table so that we can restore it on a downgrade
  seems
  like overkill.
 
  I guess it comes down to use case... Do we honestly expect admins to
  regret an upgrade and downgrade instead of just restoring from
  backup? If so, then we need to have backup tables for the cases where
  we can't reconstruct the data (i.e. it was provided by users and
  therefore not something we can calculate).
 
 
  So, assuming we don't keep the data in some kind of backup state, is there
  a way we should be documenting which migrations are backwards incompatible?
  Perhaps there should be different classifications for data
  incompatibilities and schema incompatibilities.
 
  Having given it some more thought, I think I would like to see migrations
  keep backups of obsolete data. I don't think it is unforeseeable that an
  administrator would upgrade a test instance (or, less likely, a production
  one) by accident, or without realising their backups are corrupted,
  outdated or invalid. Being able to roll back from this point could be
  quite useful. I think potentially more useful than that, though, is that
  if somebody ever needs to go back and look at some data that would
  otherwise be lost, it is still in the backup table.
 
  As such I think it might be good to see all migrations be downgradable
  through the use of backup tables where necessary. To couple this, I think
  it would be good to have a standard for backup table naming and maybe
  schema (similar to shadow tables), as well as an official list of backup
  tables in the documentation stating in which migration they were
  introduced and how to expire them.
 
  In regards to the backup schema, it could be exactly the same as the
  table being backed up (my preference), or the backup schema could contain
  just the lost columns/changes.
 
  In regards to the name, I quite like backup_table-name_migration_214.
  The backup table name could also contain a description of what is backed
  up (for example, 'uuid_column').
 
  In terms of expiry, they could be dropped after a certain release/version
  or left to the administrator to clear out, similar to shadow tables.
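
  A minimal sqlalchemy-migrate-style sketch of that convention (the
  migration number, table and column here are hypothetical, not an actual
  Nova migration) might look like:

      def upgrade(migrate_engine):
          # Preserve the user-provided data the schema change is about to
          # drop, using the proposed backup-table naming convention.
          migrate_engine.execute(
              "CREATE TABLE backup_instances_migration_214 AS "
              "SELECT id, uuid FROM instances")
          migrate_engine.execute("ALTER TABLE instances DROP COLUMN uuid")

      def downgrade(migrate_engine):
          # Restore the dropped column from the backup table.
          migrate_engine.execute(
              "ALTER TABLE instances ADD COLUMN uuid VARCHAR(36)")
          migrate_engine.execute(
              "UPDATE instances SET uuid = "
              "(SELECT b.uuid FROM backup_instances_migration_214 b "
              "WHERE b.id = instances.id)")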
 
  Thoughts?
 
  Cheers,
  Josh
 
  --
  Rackspace Australia
 
 
  Michael
 
 
 



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][oslo] mysql, sqlalchemy and sql_mode

2013-09-11 Thread Roman Podolyaka
Hi Steven,

Nice catch! This is not the first time MySQL has played a joke on us...

I think we can fix this easily by adding a callback function that sets the
proper sql_mode value when a DB connection is retrieved from the
connection pool.
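
A minimal sketch of such a callback (the engine URL is a placeholder, and
the final oslo-incubator patch may look different):

    import sqlalchemy
    from sqlalchemy import event

    engine = sqlalchemy.create_engine("mysql://user:password@localhost/heat")

    @event.listens_for(engine, "connect")
    def _set_sql_mode(dbapi_conn, connection_record):
        # TRADITIONAL makes MySQL raise an error instead of silently
        # truncating data that doesn't fit into a column.
        cursor = dbapi_conn.cursor()
        cursor.execute("SET SESSION sql_mode = 'TRADITIONAL'")
        cursor.close()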

We'll provide a fix to oslo-incubator soon.

Thanks,
Roman

[1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/


On Wed, Sep 11, 2013 at 1:37 PM, Steven Hardy sha...@redhat.com wrote:

 Hi all,

 I'm investigating some issues, where data stored to a text column in mysql
 is silently truncated if it's too big.

 It appears that the default configuration of mysql, and the sessions
 established via sqlalchemy is to simply warn on truncation rather than
 raise an error.

 This seems to me to be almost never what you want, since on retrieval the
 data is corrupt and bad/unexpected stuff is likely.

 This AFAICT is a mysql-specific issue [1], which can be resolved by setting
 sql_mode to "traditional" [2,3], after which an error is raised on
 truncation, allowing us to catch the error before the data is stored.

 My question is, how do other projects, or oslo.db, handle this atm?

 It seems we either have to make sure the DB enforces the schema/model, or
 validate every single value before attempting to store, which seems like an
 unreasonable burden given that the schema changes pretty regularly.

 Can any mysql, sqlalchemy and oslo.db experts pitch in with opinions on
 this?

 Thanks!

 Steve

 [1] http://www.enricozini.org/2012/tips/sa-sqlmode-traditional/
 [2]
 http://rpbouman.blogspot.co.uk/2009/01/mysqls-sqlmode-my-suggestions.html
 [3] http://dev.mysql.com/doc/refman/5.5/en/server-sql-mode.html


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Hi,

There is a patch on review (https://review.openstack.org/#/c/45422/) fixing
https://bugs.launchpad.net/tripleo/+bug/1221620 which has importance
'Critical' in Nova and TripleO (long story short: currently Nova Baremetal
deployments with more than one baremetal node won't work).

It would be really nice to have this patch reviewed by core developers, so
we can fix the bug ASAP.

Thanks,
Roman
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Hi,

I'm OK with either accepting this patch or reverting the commit that
introduced the regression, but it would be really nice to have these DB
optimizations in Nova.

As for your concern about accepting such optimizations: I don't think it's
a problem with such patches themselves, but rather with the lack of
comprehensive tests of complex OpenStack installations in our CI (at the
same time, I personally believe our CI is the best thing that ever happened
to OpenStack :) CI team, you really rock!).

Anyway, TripleO-CI found this regression. Maybe we should consider adding
its job to the Nova check/gate pipelines?

Thanks,
Roman


On Mon, Sep 9, 2013 at 1:59 PM, Nikola Đipanov ndipa...@redhat.com wrote:

 On 09/09/13 11:25, Roman Podolyaka wrote:
  Hi,
 
  There is a patch on review (https://review.openstack.org/#/c/45422/)
  fixing https://bugs.launchpad.net/tripleo/+bug/1221620 which has
  importance 'Critical' in Nova and TripleO (long story short: currently
  Nova Baremetal deployments with more than one baremetal node won't work).
 
  It would be really nice to have this patch reviewed by core developers,
  so we can fix the bug ASAP.
 

 Hey - thanks for responding quickly - I commented on the patch and tbh I
 am starting to be -1 on this due to issues mentioned on the review.

 I will accept that my take on this is too conservative :) and remove my
 -1 if needed to get this in, but at this point, I have some doubts
 whether this is the right approach.

 Cheers,

 N.


  Thanks,
  Roman
 
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Baremetal][DB] Core review request: bugfix for 1221620

2013-09-09 Thread Roman Podolyaka
Robert,

Cool! That would be really nice! nova-bm functional tests are the bare
minimum we need.

Thanks,
Roman


On Mon, Sep 9, 2013 at 2:27 PM, Robert Collins robe...@robertcollins.net wrote:

 We desperately want TripleO in the gate and are working towards it.
 However, doing that acceptably fast makes it non-trivial [nested VMs
 are kind of slow...]. If there were BYOI in Rackspace and in HP's
 non-Beta regions, and custom networking in both clouds, then we could
 avoid a layer of nesting and test the different TripleO layers
 independently (and concurrently), but sadly that's not the case yet, so
 we're having to work on tuning and the occasional hack.

 We hope to have at least functional tests for nova-bm in the gate soon,
 with TripleO as a whole following on after that.

 -Rob

 On 9 September 2013 23:12, Roman Podolyaka rpodoly...@mirantis.com
 wrote:
  Hi,
 
  I'm OK with either accepting this patch or reverting the commit that
  introduced the regression, but it would be really nice to have these DB
  optimizations in Nova.

  As for your concern about accepting such optimizations: I don't think
  it's a problem with such patches themselves, but rather with the lack of
  comprehensive tests of complex OpenStack installations in our CI (at the
  same time, I personally believe our CI is the best thing that ever
  happened to OpenStack :) CI team, you really rock!).

  Anyway, TripleO-CI found this regression. Maybe we should consider adding
  its job to the Nova check/gate pipelines?
 
  Thanks,
  Roman
 
 
  On Mon, Sep 9, 2013 at 1:59 PM, Nikola Đipanov ndipa...@redhat.com
 wrote:
 
  On 09/09/13 11:25, Roman Podolyaka wrote:
   Hi,
  
   There is a patch on review (https://review.openstack.org/#/c/45422/)
   fixing https://bugs.launchpad.net/tripleo/+bug/1221620 which has
   importance 'Critical' in Nova and TripleO (long story short: currently
   Nova Baremetal deployments with more than one baremetal node won't
   work).
  
   It would be really nice to have this patch reviewed by core
 developers,
   so we can fix the bug ASAP.
  
 
  Hey - thanks for responding quickly - I commented on the patch and tbh I
  am starting to be -1 on this due to issues mentioned on the review.
 
  I will accept that my take on this is too conservative :) and remove my
  -1 if needed to get this in, but at this point, I have some doubts
  whether this is the right approach.
 
  Cheers,
 
  N.
 
 
   Thanks,
   Roman
  
  
  
 
 
 
 
 
 



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack + PyPy: Status and goals

2013-09-09 Thread Roman Podolyaka
Hi Alex,

That's really cool! I believe performance is not the only benefit we can
get from running OpenStack projects on PyPy. We can also improve the
overall correctness of our code (as PyPy behaves differently with
non-closed files, etc.), just like compiling your C/C++ app with
different compilers can reveal hidden errors.

And what about eventlet? Does it work well on PyPy? (as it is used in Nova,
Neutron, etc)

Thanks,
Roman


On Tue, Sep 10, 2013 at 12:28 AM, Alex Gaynor alex.gay...@gmail.com wrote:

 Hi all,

 Many of you have probably seen me send review requests in the last few
 weeks
 about adding PyPy support to various OpenStack projects. A few people were
 confused by these, so I wanted to fill everyone in on what I'm up to :)

 First, for those who aren't familiar with what PyPy is: PyPy is an
 implementation of the Python language which includes a high performance
 tracing
 just-in-time compiler and which is faster than CPython (the reference, and
 most
 widely deployed, implementation) on almost all workloads.

 The current status is:

 Two major projects work: Marconi and Swift. Marconi is gating against
 PyPy already; Swift isn't yet, since I needed to fix a few small PyPy bugs
 and those aren't in a release yet, expect it soon :)

 In terms of results, I've observed 30% performance improvements on GET
 workloads for Swift under PyPy vs. CPython (other workloads haven't been
 benchmarked yet). I believe the Marconi folks have also observed some
 performance wins, but I'll let them speak to that, as I don't have the
 full details.

 Many python-*client projects are also working out of the box and gating,
 including novaclient, swiftclient, marconiclient, ceilometerclient,
 heatclient, and ironicclient!

 There's a few outstanding reviews to add PyPy gating for cinderclient,
 troveclient, and glanceclient.

 In terms of future direction:

 I'm going to continue to work on getting more projects running and gating
 against PyPy.

 Right now I'm focusing a lot of my attention on improving Swift
 performance,
 particularly under PyPy, but also under CPython.

 I'm hoping some day PyPy will be the default way to deploy OpenStack!


 If you're interested in getting your project running on PyPy, or looking at
 performance under it, please let me know, I'm always interested in helping!

 Thanks,
 Alex

 --
 "I disapprove of what you say, but I will defend to the death your right
 to say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
 "The people's good is the highest law." -- Cicero
 GPG Key fingerprint: 125F 5C67 DFE9 4084



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] headsup - transient test failures on py26 'cannot import name OrderedDict'

2013-07-19 Thread Roman Podolyaka
Hi guys,

Both 0.0.16 and 0.0.17 seem to have a broken test counter: it shows that
twice as many tests have been run as I actually have.

Thanks,
Roman


On Thu, Jul 18, 2013 at 2:29 AM, David Ripton drip...@redhat.com wrote:

 On 07/17/2013 04:54 PM, Robert Collins wrote:

 On 18 July 2013 08:48, Chris Jones c...@tenshu.net wrote:

 Hi

 On 17 July 2013 21:27, Robert Collins robe...@robertcollins.net wrote:


  Surely that's fixable by having a /opt/ install of Python 2.7 built for
  RHEL? That would make life so much easier for all concerned, and is super



 Possibly not easier for those tasked with keeping OS security patches up
 to
 date, which is part of what a RHEL customer is paying Red Hat a bunch of
 money to do.


 I totally agree, which is why it would make sense for Red Hat to
 supply the build of Python 2.7 :).


 FYI,

 http://developerblog.redhat.com/2013/06/05/red-hat-software-collections-1-0-beta-now-available/

 (TL;DR: Red Hat Software Collections is a way to get newer versions of
 Python and some other software on RHEL 6.  It's still in beta though.)

 --
 David Ripton   Red Hat   drip...@redhat.com



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-17 Thread Roman Podolyaka
Hi,

Indeed, stable/grizzly contains the following code in the base test case
class (quantum/tests/base.py):

    if os.environ.get('OS_STDOUT_NOCAPTURE') not in TRUE_STRING:
        stdout = self.useFixture(fixtures.StringStream('stdout')).stream
        self.useFixture(fixtures.MonkeyPatch('sys.stdout', stdout))

so stdout is captured by default, and you should use OS_STDOUT_NOCAPTURE=1
instead.

The behavior was changed in this commit
https://github.com/openstack/neutron/commit/91bd4bbaeac37d12e61c9c7b033f55ec9f1ab562
.

Thanks,
Roman


On Wed, Jul 17, 2013 at 8:44 AM, Qiu Yu unic...@gmail.com wrote:

 On Wed, Jul 17, 2013 at 12:00 PM, Roman Podolyaka
 rpodoly...@mirantis.com wrote:
  Hi,
 
  Ensure that stdout isn't captured by the corresponding fixture:
 
  OS_STDOUT_CAPTURE=0 python -m testtools.run
 
 neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
  Tests running...

 Thanks Roman, ipdb works fine with test cases in Neutron master
 branch. And if you run 'python -m testtools.run {testcase}', stdout is
 not captured by default.

 However, the issue still exists with Neutron stable/grizzly branch,
 even with OS_STDOUT_CAPTURE=0. Not quite sure which change in trunk
 resolved this issue.

 Thanks,
 --
 Qiu Yu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ipdb debugging in Neutron test cases?

2013-07-16 Thread Roman Podolyaka
Hi,

Ensure that stdout isn't captured by the corresponding fixture:

OS_STDOUT_CAPTURE=0 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...

> /home/rpodolyaka/src/neutron/neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py(251)test_port_update()
    250
--> 251         with contextlib.nested(
    252             mock.patch.object(self.agent.int_br, "get_vif_port_by_id"),


OS_STDOUT_CAPTURE=1 python -m testtools.run
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
Tests running...
==
ERROR:
neutron.tests.unit.openvswitch.test_ovs_neutron_agent.TestOvsNeutronAgent.test_port_update
--
Empty attachments:
  pythonlogging:''
  stdout

Traceback (most recent call last):
  File "neutron/tests/unit/openvswitch/test_ovs_neutron_agent.py", line 248, in test_port_update
    import ipdb
  [...]
AttributeError: '_io.BytesIO' object has no attribute 'name'

Thanks,
Roman


On Wed, Jul 17, 2013 at 5:58 AM, Qiu Yu unic...@gmail.com wrote:

 Hi,

 I'm wondering, has anyone ever tried using ipdb in Neutron test
 cases? The same trick that used to work with Nova cannot be
 applied in Neutron.

 For example, you can trigger one specific test case. But once an ipdb
 line is added, the following exception will be raised from IPython.

 Any thoughts? How can I make ipdb work with Neutron test cases? Thanks!

 $ source .venv/bin/activate
 (.venv)$ python -m testtools.run

 quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update

 ==
 ERROR:
 quantum.tests.unit.openvswitch.test_ovs_quantum_agent.TestOvsQuantumAgent.test_port_update
 --
 Empty attachments:
   pythonlogging:''
   stderr
   stdout

 Traceback (most recent call last):
   File "quantum/tests/unit/openvswitch/test_ovs_quantum_agent.py", line 163, in test_port_update
     from ipdb import set_trace; set_trace()
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__init__.py", line 16, in <module>
     from ipdb.__main__ import set_trace, post_mortem, pm, run, runcall, runeval, launch_ipdb_on_exception
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/ipdb/__main__.py", line 26, in <module>
     import IPython
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/__init__.py", line 43, in <module>
     from .config.loader import Config
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/__init__.py", line 16, in <module>
     from .application import *
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/application.py", line 31, in <module>
     from IPython.config.configurable import SingletonConfigurable
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/configurable.py", line 26, in <module>
     from loader import Config
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/config/loader.py", line 27, in <module>
     from IPython.utils.path import filefind, get_ipython_dir
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/path.py", line 25, in <module>
     from IPython.utils.process import system
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/process.py", line 27, in <module>
     from ._process_posix import _find_cmd, system, getoutput, arg_split
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/_process_posix.py", line 27, in <module>
     from IPython.utils import text
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/text.py", line 29, in <module>
     from IPython.utils.io import nlprint
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py", line 78, in <module>
     stdout = IOStream(sys.stdout, fallback=devnull)
   File "/opt/stack/quantum/.venv/local/lib/python2.7/site-packages/IPython/utils/io.py", line 42, in __init__
     setattr(self, meth, getattr(stream, meth))
 AttributeError: '_io.BytesIO' object has no attribute 'name'


 --
 Qiu Yu


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra][Nova] Running Nova DB API tests on different backends

2013-06-21 Thread Roman Podolyaka
Hello Sean, all,

Currently there are ~30 test classes in the DB API tests, containing ~370
test cases. setUpClass()/tearDownClass() would definitely be an improvement,
but applying all DB schema migrations on MySQL 30 times is still going to
take a long time...

Thanks,
Roman


On Fri, Jun 21, 2013 at 3:02 PM, Sean Dague s...@dague.net wrote:

 On 06/21/2013 07:40 AM, Roman Podolyaka wrote:

 Hi, all!

 In Nova we've got a DB access layer known as DB API, and tests for it.
 Currently, those tests are run only against an SQLite in-memory DB, which
 is great for speed but doesn't allow us to spot backend-specific errors.

 There is a blueprint
 (https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends)
 by Boris Pavlovic, whose goal is to run the DB API tests on all DB
 backends (e.g. SQLite, MySQL and PostgreSQL). Recently, I've been
 working on an implementation of this BP
 (https://review.openstack.org/#/c/33236/).

 The chosen approach for implementing this is best explained by going
 through a list of problems which must be solved:

 1. Tests should be executed concurrently by testr.

 testr creates a few worker processes, each running a portion of the test
 cases. When an SQLite in-memory DB is used for testing, each of those
 processes has its own DB in its address space, so no race conditions
 are possible. If we used a shared MySQL/PostgreSQL DB, the test suite
 would fail due to various race conditions. Thus, we must create a
 separate DB for each of the test-running processes and drop those when
 all tests end.

 The question is, where should we create/drop those DBs? There are a few
 possible places in our code:
 1) setUp()/tearDown() methods of test cases. These are executed for
 each test case (there are ~370 tests in test_db_api), so it must be a
 bad idea to create a DB/apply migrations/drop the DB 370 times, if MySQL
 or PostgreSQL is used instead of an SQLite in-memory DB
 2) testr supports creation of isolated test environments
 (https://testrepository.readthedocs.org/en/latest/MANUAL.html#remote-or-isolated-test-environments).
 Long story short: we can specify commands to execute before tests are
 run, after tests have ended, and how to run tests
  3) module/package level setUp()/tearDown(), but these are probably
 supported only in nosetest


 How many classes are we talking about? We're actually going after a
 similar problem in Tempest, where setUp isn't cheap, so Matt Treinish has
 an experimental patch to testr which allows class-level partitioning
 instead. Then you can use setUpClass / tearDownClass for expensive
 resource setup.


  So:
 1) before tests are run, a few test DBs are created (the number of
 created DBs is equal to the concurrency level used);
 2) for each test-running process an env variable, containing the
 connection string of its DB, is set;
 3) after all test-running processes have ended, the created DBs are
 dropped.
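
 For illustration, a .testr.conf along these lines could wire that up (the
 helper script names and the env variable name are hypothetical):

     [DEFAULT]
     test_command=${PYTHON:-python} -m subunit.run discover ./nova/tests $LISTOPT $IDOPTION
     test_id_option=--load-list $IDFILE
     test_list_option=--list
     # Create one DB per test-running process; drop them all at the end:
     instance_provision=./tools/db/provision_dbs.sh $INSTANCE_COUNT
     instance_dispose=./tools/db/drop_dbs.sh $INSTANCE_IDS
     # Point each worker at its own DB via a connection string in the env:
     instance_execute=OS_TEST_DBAPI_CONNECTION=mysql://openstack_citest:secret@localhost/test_$INSTANCE_ID $COMMAND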

 2. Test cleanup should be fast.

 For the SQLite in-memory DB we use the create DB/apply migrations/run
 test/drop DB pattern, but that would be too slow for running tests on
 MySQL or PostgreSQL.

 Another option would be to create a DB only once for each test-running
 process, apply DB migrations, and then run each test case within a DB
 transaction which is rolled back after the test ends. Combined with
 something like the fsync = off option of PostgreSQL, this approach works
 really fast (on my PC it takes ~5 s to run the DB API tests on SQLite and
 ~10 s on PostgreSQL).
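
 A rough sketch of that pattern (not Nova's actual test base class; the
 engine URL is a placeholder), using testtools:

     import sqlalchemy
     from sqlalchemy import orm
     import testtools

     # The schema is created and migrations are applied once per test
     # process, before any test runs.
     engine = sqlalchemy.create_engine(
         "postgresql://openstack_citest:secret@localhost/nova_test")

     class DBAPITestCase(testtools.TestCase):
         def setUp(self):
             super(DBAPITestCase, self).setUp()
             self.connection = engine.connect()
             self.transaction = self.connection.begin()
             self.session = orm.Session(bind=self.connection)
             # Cleanups run in reverse order: close the session, roll the
             # transaction back, then release the connection, leaving a
             # pristine schema for the next test.
             self.addCleanup(self.connection.close)
             self.addCleanup(self.transaction.rollback)
             self.addCleanup(self.session.close)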


 I like the idea of creating a transaction in setup and triggering a
 rollback in teardown; that's pretty clever.


  3. Tests should be easy to run for developers as well as for Jenkins.

 DB API tests are the only tests which should be run on different
 backends. All other test cases can be run on SQLite. The convenient way
 to do this is to create a separate tox env, running only DB API tests.
 Developers specify the DB connection string which effectively defines
 the backend that should be used for running tests.

 I'd rather not run those tests 'opportunistically' in py26 and py27 as
 we do for migrations, because they are going to be broken for some time
 (most problems are described here:
 https://docs.google.com/a/mirantis.com/document/d/1H82lIxd54CRmy-22oPRUS1sBkEtiguMU8N0whtye-BE/edit).
 So it would be really nice to have a separate non-voting gate test.


 A separate tox env is the right approach IMHO; that would let it run
 isolated and non-voting until we get to the bottom of the issues. For
 simplicity I'd still use the opportunistic db user / pass, as that will
 mean it could run upstream today.

 -Sean

 --
 Sean Dague
 http