[openstack-dev] [oslo.db] Stepping down from core

2017-06-11 Thread Roman Podoliaka
Hi all,

I recently changed jobs and haven't been able to devote as much time
to oslo.db as is expected from a core reviewer. I'm no longer working
on OpenStack, so you won't see me around much.

Anyway, it's been an amazing experience to work with all of you! Best
of luck! And see ya at various PyCon's around the world! ;)

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][requirements][all] requesting assistance to unblock SQLAlchemy 1.1 from requirements

2017-03-15 Thread Roman Podoliaka
Isn't the purpose of that specific job -
gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata - to test a
change to the library's master branch against stable (i.e. Ocata)
releases of all other components?

On Wed, Mar 15, 2017 at 5:20 PM, Sean Dague  wrote:
> On 03/15/2017 10:38 AM, Mike Bayer wrote:
>>
>>
>> On 03/15/2017 07:30 AM, Sean Dague wrote:
>>>
>>> The problem was the original patch kept a cap on SQLA, just moved it up
>>> to the next pre-release, not realizing the caps in general are the
>>> concern by the requirements team. So instead of upping the cap, I just
>>> removed it entirely. (It also didn't help on clarity that there was a
>>> completely unrelated fail in the tests which made it look like the
>>> system was stopping this.)
>>>
>>> This should hopefully let new SQLA releases very naturally filter out to
>>> all our services and libraries.
>>>
>>> -Sean
>>>
>>
>> so the failure I'm seeing now is *probably* one I saw earlier when we
>> tried to do this, the tempest run fails on trying to run a keystone
>> request, but I can't find the same error in the logs this time.
>>
>> In an earlier build of https://review.openstack.org/#/c/423192/, we saw
>> this:
>>
>> ContextualVersionConflict: (SQLAlchemy 1.1.5
>> (/usr/local/lib/python2.7/dist-packages),
>> Requirement.parse('SQLAlchemy<1.1.0,>=1.0.10'), set(['oslo.db',
>> 'keystone']))
>>
>> stack trace was in the apache log:  http://paste.openstack.org/show/601583/
>>
>>
>> but now on our own oslo.db build, the same jobs are failing and are
>> halting at keystone, but I can't find any error:
>>
>> the failure is:
>>
>>
>> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/
>>
>>
>> and is on:  https://review.openstack.org/#/c/445930/
>>
>>
>> if someone w/ tempest expertise could help with this that would be great.
>
> It looks like oslo.db master is being used with ocata services?
> http://logs.openstack.org/30/445930/1/check/gate-tempest-dsvm-neutron-src-oslo.db-ubuntu-xenial-ocata/815962d/logs/devstacklog.txt.gz#_2017-03-15_13_10_52_434
>
>
> I suspect that's the root issue. That should be stable/ocata branch, right?
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



Re: [openstack-dev] [nova] [placement] experimenting with extracting placement

2017-03-14 Thread Roman Podoliaka
Hi Matt,

On Tue, Mar 14, 2017 at 5:27 PM, Matt Riedemann  wrote:
> We did agree to provide an openstackclient plugin purely for CLI
> convenience. That would be in a separate repo, not part of nova or
> novaclient. I've started a blueprint [1] for tracking that work. *The
> placement osc plugin blueprint does not currently have an owner.* If this is
> something someone is interested in working on, please let me know.
>
> [1] https://blueprints.launchpad.net/nova/+spec/placement-osc-plugin

I'll be glad to help with this!

Thanks,
Roman



Re: [openstack-dev] [oslo_reports]Does GMR work for uwsgi mode service?

2017-03-07 Thread Roman Podoliaka
Hi,

My understanding is that it's not recommended for WSGI apps to set up
custom signal handlers. The reason for that is that a WSGI server
(i.e. uwsgi in your case or Apache+mod_wsgi) will most likely have its
own handlers for the very same set of signals [1].

There is an alternative way to trigger the generation of a report:
changing the modification date of a specified file [2].

Thanks,
Roman

[1] 
https://code.google.com/archive/p/modwsgi/wikis/ConfigurationDirectives.wiki#WSGIRestrictSignal
[2] https://review.openstack.org/#/c/260976/
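For illustration, here is a rough, self-contained sketch of the file-trigger idea (the helper names are hypothetical and this is not the actual oslo.reports implementation): a background thread polls the trigger file's mtime and dumps a report when it changes, so no signal handlers are needed and it is safe under uwsgi or Apache+mod_wsgi.

```python
import os
import sys
import threading
import time
import traceback

def watch_trigger_file(path, on_change, interval=0.1):
    """Poll a file's mtime and call on_change() whenever it is updated.

    Same idea as the oslo.reports file trigger: no signal handler is
    installed, so this works inside a WSGI server.
    """
    def _poll():
        last = os.stat(path).st_mtime
        while True:
            time.sleep(interval)
            mtime = os.stat(path).st_mtime
            if mtime != last:
                last = mtime
                on_change()

    thread = threading.Thread(target=_poll, daemon=True)
    thread.start()
    return thread

def dump_report():
    # Stand-in for generating a real Guru Meditation Report: just dump
    # the current stack of every thread to stderr.
    for thread_id, frame in sys._current_frames().items():
        print("Thread %s:" % thread_id, file=sys.stderr)
        traceback.print_stack(frame, file=sys.stderr)
```

A service would call something like watch_trigger_file('/var/run/myapp/gmr_trigger', dump_report) at startup, and an operator would `touch` the trigger file to get a report.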

On Tue, Mar 7, 2017 at 8:02 AM, hao wang  wrote:
> Hi, stackers,
>
> I'm trying to use Guru Meditation Report in Zaqar project which can
> support uwsgi server.  I imported gmr and used
> "gmr.TextGuruMeditation.setup_autorun(version, conf=conf)",  but it
> didn't work under uwsgi mode.
>
> Did I do something wrong,  or  GMR doesn't support uwsgi mode yet?
>
> Thanks for your help!
>
> Wang Hao
>



Re: [openstack-dev] [all][requirements] Eventlet verion bump coming?

2017-03-01 Thread Roman Podoliaka
Hi Tony,

I'm ready to help with this!

The version we use now (0.19.0) has (at least) 2 known issues:

- recv_into() >8kb from an SSL wrapped socket hangs [1]
- adjusting the system clock backwards makes periodic tasks hang [2]

so it'd be great to allow for newer releases in upper-constraints.

Thanks,
Roman

[1] https://github.com/eventlet/eventlet/issues/315
[2] https://review.openstack.org/#/c/434327/

On Tue, Feb 14, 2017 at 6:57 AM, Tony Breeds  wrote:
> Hi All,
> So there is a new version of eventlet out and we refused to bump it late 
> in
> the ocata cycle but now that we're early in the pike cycle I think we're okay
> to do it.  The last time[1] we tried to bump eventlet it was pretty rocky and 
> we
> decided that we'd need a short term group of people focused on testing the new
> bump rather than go through the slightly painful:
>
>  1: Bump eventlet version
>  2: Find and file bugs
>  3: Revert
>  4: Wait for next release
>  goto 1
>
> process.  So can we get a few people together to map this out?  I'd like to 
> try it
> shortly after the PTG?
>
> From an implementation POV I'd like to bump the upper-constraint and let that
> sit for a while before we touch global-requirements.txt
>
> Yours Tony.
>
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/thread.html#86745
>
>



Re: [openstack-dev] [oslo][oslo.db] MySQL Cluster support

2017-02-03 Thread Roman Podoliaka
On Fri, Feb 3, 2017 at 4:41 PM, Mike Bayer  wrote:
> Not sure if people on the list are seeing that we are simultaneously talking
> about getting rid of Postgresql in the efforts to support only "one
> database", while at the same time adding one that is in many ways a new
> database.

++

and, FWIW, moving columns between tables and changing column types
in order to make the NDB storage engine happy both seem to be way
more intrusive than anything we've had to do so far in the code of
OpenStack projects in order to support PostgreSQL.



Re: [openstack-dev] [nova] Different length limit for tags in object definition and db model definition

2017-01-17 Thread Roman Podoliaka
Hi all,

Changing the type of column from VARCHAR(80) to VARCHAR(60) would also
require a data migration (i.e. a schema migration to add a new column
with the "correct" type, changes to the object, data migration logic)
as it is not an "online" DDL operation according to [1].  Adding a new
API microversion seems to be easier.

Thanks,
Roman

[1] 
https://dev.mysql.com/doc/refman/5.7/en/innodb-create-index-overview.html#innodb-online-ddl-column-properties
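As a rough sketch of the expand/backfill pattern described above (illustrated with SQLite for brevity; the table and column names are hypothetical, and a real MySQL migration would use ALTER TABLE plus batched UPDATEs):

```python
import sqlite3

# Instead of altering the column type in place (not an online DDL
# operation for InnoDB), add a new column with the desired type,
# backfill it, and only later drop the old one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (id INTEGER PRIMARY KEY, tag VARCHAR(80))")
conn.execute("INSERT INTO tags (tag) VALUES ('web-server')")

# Step 1 (schema migration): add a new column with the "correct" type.
conn.execute("ALTER TABLE tags ADD COLUMN tag_new VARCHAR(60)")

# Step 2 (data migration): backfill, truncating where necessary.
conn.execute(
    "UPDATE tags SET tag_new = substr(tag, 1, 60) WHERE tag_new IS NULL")

# Step 3 (contract, a later release): drop the old column and rename --
# elided here, since SQLite's ALTER TABLE support differs from MySQL's.
row = conn.execute("SELECT tag_new FROM tags").fetchone()
print(row[0])  # -> web-server
```

The point of the sketch is the extra moving parts: every step above needs corresponding object-layer changes, which is why a new API microversion can be the cheaper option.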

On Tue, Jan 17, 2017 at 10:19 AM, Sergey Nikitin  wrote:
> Hi, Zhenyu!
>
> I think we should ask the DB guys about the migration. But my personal opinion is
> that a DB migration is much more painful than a new microversion.
>
>>  But it seems too late to have a microversion for this cycle.
>
>
> Correct me if I'm wrong but I thought that Feature Freeze will be in action
> Jan 26.
> https://wiki.openstack.org/wiki/Nova/Ocata_Release_Schedule
>
> Even if we need a new microversion I think it will be a specless
> microversion and patch will change about 5 lines of code. We can merge such
> patch in one day.
>
>
>
>



Re: [openstack-dev] [nova][libvirt] Lets make libvirt's domain XML canonical

2016-10-03 Thread Roman Podoliaka
Timofei,

On Mon, Oct 3, 2016 at 10:11 AM, Timofei Durakov  wrote:
> Hi team,
> Taking that into account, the
> question here would be:  why not to store all required information(e.g. boot
> order) in DB instead?

I think, we definitely could do that, just like we currently preserve
the order of NICs on reboot. I'm not sure it fully resolves Matt's
concerns, though, when it comes to preserving PCI device addresses. If
we were to keep the Nova DB as our source of truth, then we'd probably
need the libvirt driver to generate these addresses on creation of a
domain in a consistent fashion (assuming we store the order of devices
in the DB).

Thanks,
Roman



Re: [openstack-dev] [cinder][db] lazy loading of an attribute impossible

2016-09-30 Thread Roman Podoliaka
Michał,

You are absolutely right: this exception is raised when you try to
lazy-load instance attributes outside a Session scope. There is an
obvious problem with that - instances do not communicate with a DB on
their own - it's left up to Session [1].

Unfortunately, it does not play nicely with the "classic" DB access
layer we have in Cinder and other projects, where you have a notion
of pluggable DB APIs and a SQLAlchemy implementation that looks like:

@require_context
@handle_db_data_error
def snapshot_create(context, values):
    values['snapshot_metadata'] = _metadata_refs(values.get('metadata'),
                                                 models.SnapshotMetadata)
    if not values.get('id'):
        values['id'] = str(uuid.uuid4())

    session = get_session()
    with session.begin():
        snapshot_ref = models.Snapshot()
        snapshot_ref.update(values)
        session.add(snapshot_ref)

        return _snapshot_get(context, values['id'], session=session)

In this case a Session (and transaction) scope is bound to "public" DB
API functions. There are a few problems with this:

1) once a public DB function returns an instance, it becomes prone to
lazy-load errors, as the corresponding session (and DB transaction) is
already gone and it's not possible to load missing data (without
establishing a new session/transaction)

2) you have to carefully pass a Session object when doing calls to
"private" DB API functions to ensure they all participate in the very
same DB transaction. Otherwise snapshot_get() above would not see the
row created by snapshot_create() due to isolation of transactions in
RDBMS

3) if you do multiple calls to "public" DB API functions when
handling a single HTTP request, it's no longer easy to do a rollback,
as every function creates its own DB transaction

Mixing of Session objects creation with the actual business logic is
considered to be an anti-pattern in SQLAlchemy [2] due to problems
mentioned above.

At this point I suggest you take a look at [3] and start using it in
Cinder: in Kilo we did a complete redesign of EngineFacade in oslo.db.
It won't solve all your problems with lazy-loading automatically, but
it does provide a tool for declarative definition of session (and
transaction) scope, so that the scope is no longer limited to one
"public" DB API function and can be extended when needed: you no
longer create a Session object explicitly, but rather mark methods
with a decorator that injects a session into the context, so that all
callees participate in the established session (and thus the same DB
transaction) rather than create new ones. (My personal opinion is
that for web services it's preferable to bind the session/transaction
scope to the scope of one HTTP request, so that it's easy to roll
back changes on errors. We are not there yet, but some projects like
Nova are already moving the session scope up the stack, e.g. to the
objects layer.)

Thanks,
Roman

[1] 
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#what-does-the-session-do
[2] 
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#when-do-i-construct-a-session-when-do-i-commit-it-and-when-do-i-close-it
[3] 
https://specs.openstack.org/openstack/oslo-specs/specs/kilo/make-enginefacade-a-facade.html
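To make the idea concrete, here is a toy, dependency-free sketch of the decorator pattern (this is *not* oslo.db's actual code; the real library exposes reader/writer decorators as described in [3]): the outermost decorated function opens the session scope, and any decorated callee reuses it instead of opening its own.

```python
import contextlib
import functools

class RequestContext:
    """Minimal stand-in for an OpenStack request context."""
    session = None

@contextlib.contextmanager
def _session_scope(context):
    # Reuse the enclosing session if one is already active, so that all
    # callees participate in the same (notional) DB transaction.
    if context.session is not None:
        yield context.session
        return
    context.session = object()  # stand-in for a real SQLAlchemy Session
    try:
        yield context.session
    finally:
        context.session = None

def writer(func):
    @functools.wraps(func)
    def wrapper(context, *args, **kwargs):
        with _session_scope(context):
            return func(context, *args, **kwargs)
    return wrapper

@writer
def snapshot_get(context):
    return context.session

@writer
def snapshot_create(context, values):
    # The nested "private" call sees the very same session/transaction:
    return context.session is snapshot_get(context)

ctx = RequestContext()
print(snapshot_create(ctx, {}))  # -> True
```

This removes the need to thread a Session argument through every "private" DB API function by hand, which is exactly the second problem listed above.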

On Thu, Sep 22, 2016 at 4:45 PM, Michał Dulko  wrote:
> Hi,
>
> I've just noticed another Cinder bug [1], similar to past bugs [2], [3].
> All of them have a common exception causing them:
>
> sqlalchemy.orm.exc.DetachedInstanceError: Parent instance
> <{$SQLAlchemyObject} at {$MemoryLocation}> is not bound to a Session;
> lazy load operation of attribute '{$ColumnName}' cannot proceed
>
> We've normally fixed them by simply making the $ColumnName eager-loaded,
> but as there's another similar bug report, I'm starting to think that we
> have some issue with how we're managing our DB connections and
> SQLAlchemy objects are losing their sessions too quickly, before we'll
> manage to lazy-load required stuff.
>
> I'm not too experienced with SQLAlchemy session management, so I would
> welcome any help with investigation.
>
> Thanks,
> Michal
>
>
> [1] https://bugs.launchpad.net/cinder/+bug/1626499
> [2] https://bugs.launchpad.net/cinder/+bug/1517763
> [3] https://bugs.launchpad.net/cinder/+bug/1501838
>



Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Roman Podoliaka
FWIW, there have been no new failures in Nova jobs since then.

I'm confused as well about why these tests would sporadically take
much longer to execute. Perhaps we could install something like atop
on our nodes to answer that question.

On Wed, Sep 21, 2016 at 5:46 PM, Ihar Hrachyshka  wrote:
> I just hit that TimeoutException error in neutron functional tests:
>
> http://logs.openstack.org/68/373868/4/check/gate-neutron-dsvm-functional-ubuntu-trusty/4de275e/testr_results.html.gz
>
> It’s a bit weird that we hit that 180 sec timeout because in good runs, the
> test takes ~5 secs.
>
> Do we have a remedy against that kind of failure? I saw nova bumped the
> timeout length for the tests. Is it the approach we should apply across the
> board for other projects?
>
> Ihar
>
>
> Zane Bitter  wrote:
>
>> On 14/09/16 11:44, Mike Bayer wrote:
>>>
>>> On 09/14/2016 11:08 AM, Mike Bayer wrote:

 On 09/14/2016 09:15 AM, Sean Dague wrote:
>
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
>
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>
>
>
>
> It looks like some race has been introduced where the various db
> connections are not fully isolated from each other like they used to
> be.
> The testing magic for this is buried pretty deep in oslo.db.


 that error message occurs when a connection that is intended against a
 SELECT statement fails to provide a cursor.description attribute.  It is
 typically a driver-level bug in the MySQL world and corresponds to
 mis-handled failure modes from the MySQL connection.

 By "various DB connections are not fully isolated from each other" are
 you suggesting that a single in-Python connection object itself is being
 shared among multiple greenlets?   I'm not aware of a change in oslo.db
 that would be a relationship to such an effect.
>>>
>>>
>>> So, I think by "fully isolated from each other" what you really mean is
>>> "operations upon a connection are not fully isolated from the subsequent
>>> use of that connection", since that's what I see in the logs.  A
>>> connection is attempting to be used during teardown to drop tables,
>>> however it's in this essentially broken state from a PyMySQL
>>> perspective, which would indicate something has gone wrong with this
>>> (pooled) connection in the preceding test that could not be detected or
>>> reverted once the connection was returned to the pool.
>>>
>>> From Roman's observation, it looks like a likely source of this
>>> corruption is a timeout that is interrupting the state of the PyMySQL
>>> connection.   In the preceding stack trace, PyMySQL is encountering a
>>> raise as it attempts to call "self._sock.recv_into(b)", and it seems
>>> like some combination of eventlet's response to signals and the
>>> fixtures.Timeout() fixture is the cause of this interruption.   As an
>>> additional wart, something else is getting involved and turning it into
>>> an IndexError, I'm not sure what that part is yet though I can imagine
>>> that might be SQLAlchemy mis-interpreting what it expects to be a
>>> PyMySQL exception class, since we normally look inside of
>>> exception.args[0] to get the MySQL error code.   With a blank exception
>>> like fixtures.TimeoutException, .args is the empty tuple.
>>>
>>> The PyMySQL connection is now in an invalid state and unable to perform
>>> a SELECT statement correctly, but the connection is not invalidated and
>>> is instead returned to the connection pool in a broken state.  So the
>>> subsequent teardown, if it uses this same connection (which is likely),
>>> fails because the connection has been interrupted in the middle of its
>>> work and not given the chance to clean up.
>>>
>>> Seems like the use of fixtures.Timeout() fixture here is not organized
>>> to work with a database operation in progress, especially an
>>> eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout
>>> due to a signal handler occurs, the entire connection pool should be
>>> disposed (quickest way, engine.dispose()), or at the very least (and
>>> much more targeted), the connection that's involved should be
>>> invalidated from the pool, e.g. connection.invalidate().
>>>
>>> The change to the environment here would be that this timeout is
>>> happening at all - the reason for that is not yet known.   If oslo.db's
>>> version were involved in this error, I would guess that it would be
>>> related to this timeout condition being caused, and not anything to do
>>> with the connection provisioning.
>>>
> Olso.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post newton.
>>
>>
>> We've been seeing similar errors in Heat since at least 

Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-15 Thread Roman Podoliaka
Sean,

So currently we have a default timeout of 160s in Nova. And
specifically for migration tests we set a scaling factor of 2. Let's
maybe give 2.5 or 3 a try ( https://review.openstack.org/#/c/370805/ )
and make a couple of "rechecks" to see if it helps or not.

In Ocata we could revisit the migrations collapse again to reduce the
number of scripts.

On the testing side, we could probably "cheat" a bit to trade data
safety for performance. E.g. we could set "fsync = off" for PostgreSQL
(https://www.postgresql.org/docs/9.2/static/runtime-config-wal.html).
Similar settings must be available for MySQL as well.
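For example, the test-only server settings might look like this (a hedged sketch; exact option names should be double-checked against the server docs, and none of this belongs anywhere near real data):

```ini
# postgresql.conf -- test-only durability trade-offs
fsync = off
synchronous_commit = off
full_page_writes = off

# my.cnf -- rough MySQL/InnoDB equivalents
# innodb_flush_log_at_trx_commit = 0
# sync_binlog = 0
```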

Thanks,
Roman

On Thu, Sep 15, 2016 at 3:07 PM, Sean Dague <s...@dague.net> wrote:
> On 09/15/2016 05:52 AM, Roman Podoliaka wrote:
>> Mike,
>>
>> On Thu, Sep 15, 2016 at 5:48 AM, Mike Bayer <mba...@redhat.com> wrote:
>>
>>> * Prior to oslo.db 4.13.3, did we ever see this "timeout" condition occur?
>>> If so, was it also accompanied by the same "resource closed" condition or
>>> did this second part of the condition only appear at 4.13.3?
>>> * Did we see a similar "timeout" / "resource closed" combination prior to
>>> 4.13.3, just with less frequency?
>>
>> I believe we did -
>> https://bugs.launchpad.net/openstack-ci/+bug/1216851 , although we
>> used mysql-python back then, so the error was slightly different.
>>
>>> * What is the magnitude of the "timeout" this fixture is using, is it on the
>>> order of seconds, minutes, hours?
>>
>> It's set in seconds per project in .testr.conf, e.g.:
>>
>> https://github.com/openstack/nova/blob/master/.testr.conf
>> https://github.com/openstack/ironic/blob/master/.testr.conf
>>
>> In Nova we also have a 'timeout scaling factor' specifically set for
>> migration tests:
>>
>> https://github.com/openstack/nova/blob/master/nova/tests/unit/db/test_migrations.py#L67
>>
>>> * If many minutes or hours, can the test suite be observed to be stuck on
>>> this test?   Has someone tried to run a "SHOW PROCESSLIST" while this
>>> condition is occurring to see what SQL is pausing?
>>
>> We could try to do that in the gate, but I don't expect to see
>> anything interesting: IMO, we'd see regular queries that should have
>> been executed fast, but actually took much longer time (presumably due
>> to heavy disk IO caused by multiple workers running similar tests in
>> parallel).
>>
>>> * Is this failure only present within the Nova test suite or has it been
>>> observed in the test suites of other projects?
>>
>> According to
>>
>> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>>
>> it's mostly Nova, but this has also been observed in Ironic, Manila
>> and Ceilometer. Ironic and Manila have OS_TEST_TIMEOUT value set to 60
>> seconds.
>>
>>> * Is this failure present only on the "database migration" test suite or is
>>> it present in other opportunistic tests, for Nova and others?
>>
>> Based on the console logs I've checked only migration tests failed,
>> but that's probably due to the fact that they are usually the slowest
>> ones (again, presumably due to heavy disk IO).
>>
>>> * Have there been new database migrations added to Nova which are being
>>> exercised here and may be involved?
>>
>> Looks like there were no changes recently:
>>
>> https://review.openstack.org/#/q/project:openstack/nova+status:merged+branch:master+(file:%22%255Enova/db/sqlalchemy/migrate_repo/.*%2524%22+OR+file:%22%255Enova/tests/unit/db/test_migrations.py%2524%22)
>>
>>> I'm not sure how much of an inconvenience it is to downgrade oslo.db. If
>>> downgrading it is feasible, that would at least be a way to eliminate it as
>>> a possibility if these same failures continue to occur, or a way to confirm
>>> its involvement if they disappear.   But if downgrading is disruptive then
>>> there are other things to look at in order to have a better chance at
>>> predicting its involvement.
>>
>> I don't think we need to block oslo.db 4.13.3, unless we clearly see
>> it's this version that causes these failures.
>>
>> I gave version 4.11 (before changes to provisioning) a try on my local
>> machine and see the very same errors when concurrency level is high (
>> http://paste.openstack.org/show/577350/ ), so I don't think the latest
>> oslo.db release has anything to do with the increase of the number 

Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-15 Thread Roman Podoliaka
Mike,

I think the exact error (InterfaceError vs TimeoutException) varies
depending on what code is being executed at the very moment when a
process receives SIGALRM.

I tried to run the tests against PostgreSQL passing very small timeout
values (OS_TEST_TIMEOUT=5 python -m testtools.run
nova.tests.unit.db.test_migrations.TestNovaMigrationsPostgreSQL.test_walk_versions)
and saw both InterfaceError and TimeoutException:

http://paste.openstack.org/show/577410/
http://paste.openstack.org/show/577411/

strace'ing shows that the connection to PostgreSQL is closed right
after SIGALRM is handled:

http://paste.openstack.org/show/577425/

I tried to reproduce that manually by means of gdb and set a
breakpoint on close():

http://paste.openstack.org/show/577422/

^ looks like psycopg2 closes the connection automatically if a query
was interrupted by SIGALRM.

The corresponding Python level backtrace is:

http://paste.openstack.org/show/577423/

^ i.e. connection closing happens in the middle of cursor.execute() call.

In the end I see a similar InterfaceError:

http://paste.openstack.org/show/577424/
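The SIGALRM part is easy to reproduce without a database at all. As a self-contained illustration (Unix-only) of how a fixtures-style alarm interrupts whatever blocking call happens to be in flight:

```python
import signal
import socket

class TimeoutException(Exception):
    """Mimics fixtures.TimeoutException -- note the empty .args."""

def _on_alarm(signum, frame):
    # fixtures.Timeout raises from inside the signal handler, i.e. in
    # the middle of whatever call the process is executing -- for the
    # DB drivers discussed above, a blocking recv() on the
    # connection's socket, mid-query.
    raise TimeoutException()

signal.signal(signal.SIGALRM, _on_alarm)

# A socket that will never receive data stands in for the DB connection.
rsock, wsock = socket.socketpair()

interrupted = False
signal.alarm(1)  # fire SIGALRM in ~1 second
try:
    rsock.recv(1024)  # blocks until the alarm interrupts it
except TimeoutException:
    interrupted = True
finally:
    signal.alarm(0)  # cancel any pending alarm

print(interrupted)  # -> True
```

The driver never gets a chance to finish reading the server's response, which is why the connection ends up in a broken (or, for psycopg2, closed) state afterwards.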

That being said, this does not explain the "DB is in use" part.

Thanks,
Roman

On Thu, Sep 15, 2016 at 6:05 AM, Mike Bayer  wrote:
> There's a different set of logs attached to the launchpad issue, that's not
> what I was looking at before.
>
> These logs are at
> http://logs.openstack.org/90/369490/1/check/gate-nova-tox-db-functional-ubuntu-xenial/085ac3e/console.html#_2016-09-13_14_54_18_098031
> .In these logs, I see something *very* different, not just the MySQL
> tests but the Postgresql tests are definitely hitting conflicts against the
> randomly generated database.
>
> This set of traces, e.g.:
>
> sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) database
> "dbzrtmgbxv" is being accessed by other users
> 2016-09-13 14:54:18.093723 | DETAIL:  There is 1 other session using the
> database.
> 2016-09-13 14:54:18.093736 |  [SQL: 'DROP DATABASE dbzrtmgbxv']
>
> and
>
> File
> "/home/jenkins/workspace/gate-nova-tox-db-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
> line 668, in _rollback_impl
> 2016-09-13 14:54:18.095470 |
> self.engine.dialect.do_rollback(self.connection)
> 2016-09-13 14:54:18.095513 |   File
> "/home/jenkins/workspace/gate-nova-tox-db-functional-ubuntu-xenial/.tox/functional/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
> line 420, in do_rollback
> 2016-09-13 14:54:18.095526 | dbapi_connection.rollback()
> 2016-09-13 14:54:18.095548 | sqlalchemy.exc.InterfaceError:
> (psycopg2.InterfaceError) connection already closed
>
> are a very different animal. For one thing, they're on Postgresql where the
> driver and DB acts extremely rationally.   For another, there's no timeout
> exception here, and not all the conflicts are within the teardown.
>
> Are *these* errors also new as of version 4.13.3 of oslo.db ?   Because here
> I have more suspicion of one particular oslo.db change here.
>
>
>
>
>
>
>
>  fits much more with your initial description
>
>
> On 09/14/2016 10:48 PM, Mike Bayer wrote:
>>
>>
>>
>> On 09/14/2016 07:04 PM, Alan Pevec wrote:

 Olso.db 4.13.3 did hit the scene about the time this showed up. So I
 think we need to strongly consider blocking it and revisiting these
 issues post newton.
>>>
>>>
>>> So that means reverting all stable/newton changes, previous 4.13.x
>>> have been already blocked https://review.openstack.org/365565
>>> How would we proceed, do we need to revert all backport on stable/newton?
>>
>>
>> In case my previous email wasn't clear, I don't *yet* see evidence that
>> the recent 4.13.3 release of oslo.db is the cause of this problem.
>> However, that is only based upon what I see in this stack trace, which
>> is that the test framework is acting predictably (though erroneously)
>> based on the timeout condition which is occurring.   I don't (yet) see a
>> reason that the same effect would not occur prior to 4.13.3 in the face
>> of a signal pre-empting the work of the pymysql driver mid-stream.
>> However, this assumes that the timeout condition itself is not a product
>> of the current oslo.db version and that is not known yet.
>>
>> There's a list of questions that should all be answerable which could
>> assist in giving some hints towards this.
>>
>> There's two parts to the error in the logs.  There's the "timeout"
>> condition, then there is the bad reaction of the PyMySQL driver and the
>> test framework as a result of the operation being interrupted within the
>> test.
>>
>> * Prior to oslo.db 4.13.3, did we ever see this "timeout" condition
>> occur?   If so, was it also accompanied by the same "resource closed"
>> condition or did this second part of the condition only appear at 4.13.3?
>>
>> * Did we see a similar "timeout" / "resource closed" combination prior
>> to 4.13.3, just with less frequency?

Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-15 Thread Roman Podoliaka
Mike,

On Thu, Sep 15, 2016 at 5:48 AM, Mike Bayer  wrote:

> * Prior to oslo.db 4.13.3, did we ever see this "timeout" condition occur?
> If so, was it also accompanied by the same "resource closed" condition or
> did this second part of the condition only appear at 4.13.3?
> * Did we see a similar "timeout" / "resource closed" combination prior to
> 4.13.3, just with less frequency?

I believe we did -
https://bugs.launchpad.net/openstack-ci/+bug/1216851 , although we
used mysql-python back then, so the error was slightly different.

> * What is the magnitude of the "timeout" this fixture is using, is it on the
> order of seconds, minutes, hours?

It's set in seconds per project in .testr.conf, e.g.:

https://github.com/openstack/nova/blob/master/.testr.conf
https://github.com/openstack/ironic/blob/master/.testr.conf

In Nova we also have a 'timeout scaling factor' specifically set for
migration tests:

https://github.com/openstack/nova/blob/master/nova/tests/unit/db/test_migrations.py#L67

> * If many minutes or hours, can the test suite be observed to be stuck on
> this test?   Has someone tried to run a "SHOW PROCESSLIST" while this
> condition is occurring to see what SQL is pausing?

We could try to do that in the gate, but I don't expect to see
anything interesting: IMO, we'd see regular queries that should have
been executed fast, but actually took much longer time (presumably due
to heavy disk IO caused by multiple workers running similar tests in
parallel).

> * Is this failure only present within the Nova test suite or has it been
> observed in the test suites of other projects?

According to

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22

it's mostly Nova, but this has also been observed in Ironic, Manila
and Ceilometer. Ironic and Manila have OS_TEST_TIMEOUT value set to 60
seconds.

> * Is this failure present only on the "database migration" test suite or is
> it present in other opportunistic tests, for Nova and others?

Based on the console logs I've checked, only migration tests failed,
but that's probably because they are usually the slowest
ones (again, presumably due to heavy disk IO).

> * Have there been new database migrations added to Nova which are being
> exercised here and may be involved?

Looks like there were no changes recently:

https://review.openstack.org/#/q/project:openstack/nova+status:merged+branch:master+(file:%22%255Enova/db/sqlalchemy/migrate_repo/.*%2524%22+OR+file:%22%255Enova/tests/unit/db/test_migrations.py%2524%22)

> I'm not sure how much of an inconvenience it is to downgrade oslo.db. If
> downgrading it is feasible, that would at least be a way to eliminate it as
> a possibility if these same failures continue to occur, or a way to confirm
> its involvement if they disappear.   But if downgrading is disruptive then
> there are other things to look at in order to have a better chance at
> predicting its involvement.

I don't think we need to block oslo.db 4.13.3, unless we clearly see
it's this version that causes these failures.

I gave version 4.11 (before changes to provisioning) a try on my local
machine and see the very same errors when concurrency level is high (
http://paste.openstack.org/show/577350/ ), so I don't think the latest
oslo.db release has anything to do with the increase of the number of
failures on CI.

My current understanding is that the load on gate nodes somehow
increased (either we run more testr workers in parallel now, or
apply/test more migrations, or just run more VMs per host, or the gate
is simply busy at this point of the release cycle), so that we started
to see these timeouts more often.

Thanks,
Roman

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Roman Podoliaka
Hmm, looks like we now run more testr workers in parallel (8 instead of 4):

http://logs.openstack.org/76/335676/7/check/gate-nova-python34-db/6841fce/console.html.gz
http://logs.openstack.org/62/369862/3/check/gate-nova-python27-db-ubuntu-xenial/2784de9/console.html

On my machine, running Nova migration tests against MySQL is much
slower with 8 workers than with 4 due to disk IO (it's an HDD). When
they time out (after 320s) I see the very same TimeoutException and
IndexError (probably something mangles the TimeoutException further
up the stack).

On Wed, Sep 14, 2016 at 6:44 PM, Mike Bayer  wrote:
>
>
> On 09/14/2016 11:08 AM, Mike Bayer wrote:
>>
>>
>>
>> On 09/14/2016 09:15 AM, Sean Dague wrote:
>>>
>>> I noticed the following issues happening quite often now in the
>>> opportunistic db tests for nova -
>>>
>>> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>>>
>>>
>>>
>>> It looks like some race has been introduced where the various db
>>> connections are not fully isolated from each other like they used to be.
>>> The testing magic for this is buried pretty deep in oslo.db.
>>
>>
>> that error message occurs when a connection that is intended against a
>> SELECT statement fails to provide a cursor.description attribute.  It is
>> typically a driver-level bug in the MySQL world and corresponds to
>> mis-handled failure modes from the MySQL connection.
>>
>> By "various DB connections are not fully isolated from each other" are
>> you suggesting that a single in-Python connection object itself is being
>> shared among multiple greenlets?   I'm not aware of a change in oslo.db
>> that would be a relationship to such an effect.
>
>
> So, I think by "fully isolated from each other" what you really mean is
> "operations upon a connection are not fully isolated from the subsequent use
> of that connection", since that's what I see in the logs.  A connection is
> attempting to be used during teardown to drop tables, however it's in this
> essentially broken state from a PyMySQL perspective, which would indicate
> something has gone wrong with this (pooled) connection in the preceding test
> that could not be detected or reverted once the connection was returned to
> the pool.
>
> From Roman's observation, it looks like a likely source of this corruption
> is a timeout that is interrupting the state of the PyMySQL connection.   In
> the preceding stack trace, PyMySQL is encountering a raise as it attempts to
> call "self._sock.recv_into(b)", and it seems like some combination of
> eventlet's response to signals and the fixtures.Timeout() fixture is the
> cause of this interruption.   As an additional wart, something else is
> getting involved and turning it into an IndexError, I'm not sure what that
> part is yet though I can imagine that might be SQLAlchemy mis-interpreting
> what it expects to be a PyMySQL exception class, since we normally look
> inside of exception.args[0] to get the MySQL error code.   With a blank
> exception like fixtures.TimeoutException, .args is the empty tuple.
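The IndexError wart described above is easy to reproduce in isolation: an exception raised with no arguments has an empty `.args` tuple, so error-code extraction via `args[0]` blows up:

```python
# Minimal reproduction of the IndexError described above: an exception
# raised with no arguments (like fixtures' TimeoutException) has an
# empty .args tuple, so reading args[0] to get a MySQL error code fails.
class TimeoutException(Exception):
    pass

exc = TimeoutException()
print(exc.args)         # ()
try:
    code = exc.args[0]  # what error-code extraction effectively does
except IndexError as e:
    print(e)            # tuple index out of range
```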
>
> The PyMySQL connection is now in an invalid state and unable to perform a
> SELECT statement correctly, but the connection is not invalidated and is
> instead returned to the connection pool in a broken state.  So the
> subsequent teardown, if it uses this same connection (which is likely),
> fails because the connection has been interrupted in the middle of its work
> and not given the chance to clean up.
>
> Seems like the use of fixtures.Timeout() fixture here is not organized to
> work with a database operation in progress, especially an
> eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout due
> to a signal handler occurs, the entire connection pool should be disposed
> (quickest way, engine.dispose()), or at the very least (and much more
> targeted), the connection that's involved should be invalidated from the
> pool, e.g. connection.invalidate().
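The invalidation point can be illustrated with a toy pool (hypothetical classes, not SQLAlchemy's API): a connection broken mid-operation must be discarded on check-in, otherwise the next checkout, e.g. the teardown's DROP TABLE, inherits the broken state:

```python
import collections

# Hypothetical toy pool -- not SQLAlchemy's API -- showing why a
# connection interrupted mid-operation must be invalidated instead of
# being returned to the pool as-is.
class Conn:
    def __init__(self):
        self.broken = False

    def execute(self, query):
        if self.broken:
            raise RuntimeError("resource closed")
        return "ok"

class Pool:
    def __init__(self):
        self._free = collections.deque([Conn()])

    def get(self):
        return self._free.popleft()

    def put(self, conn, invalidate=False):
        # invalidate=True plays the role of connection.invalidate():
        # the broken connection is discarded and replaced.
        self._free.append(Conn() if invalidate else conn)

pool = Pool()
conn = pool.get()
conn.broken = True   # a timeout interrupts the driver mid-recv
pool.put(conn)       # bug: broken connection silently checked back in
try:
    pool.get().execute("DROP TABLE t")   # teardown reuses it and fails
except RuntimeError as exc:
    print(exc)                           # resource closed

pool.put(conn, invalidate=True)          # the fix: discard on check-in
print(pool.get().execute("DROP TABLE t"))  # ok
```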
>
> The change to the environment here would be that this timeout is happening
> at all - the reason for that is not yet known.   If oslo.db's version were
> involved in this error, I would guess that it would be related to this
> timeout condition being caused, and not anything to do with the connection
> provisioning.
>
>
>
>
>
>>
>>
>>
>>>
>>> Olso.db 4.13.3 did hit the scene about the time this showed up. So I
>>> think we need to strongly consider blocking it and revisiting these
>>> issues post newton.
>>>
>>> -Sean
>>>
>

Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-14 Thread Roman Podoliaka
Sean,

I'll take a closer look, but test execution times and errors look suspicious:

ironic.tests.unit.db.sqlalchemy.test_migrations.TestMigrationsPostgreSQL.test_walk_versions
60.002

2016-09-14 14:21:38.756421 |   File
"/home/jenkins/workspace/gate-ironic-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
line 62, in do_poll
2016-09-14 14:21:38.756435 | return self.poll.poll(seconds)
2016-09-14 14:21:38.756481 |   File
"/home/jenkins/workspace/gate-ironic-python27-db-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
line 52, in signal_handler
2016-09-14 14:21:38.756494 | raise TimeoutException()
2016-09-14 14:21:38.756508 | IndexError: tuple index out of range

It looks as if the test case was forcibly stopped after the timeout.
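The mechanism visible in that traceback, SIGALRM firing and the handler raising into whatever frame happens to be executing, can be sketched with the stdlib alone (Unix only; this is a simplified stand-in for the fixtures timeout, not its actual code):

```python
import signal

# Unix-only sketch of the mechanism in the traceback above: SIGALRM
# fires and the handler raises into whatever code was running.
class TimeoutException(Exception):
    pass

def signal_handler(signum, frame):
    raise TimeoutException()

signal.signal(signal.SIGALRM, signal_handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)   # a "test timeout" of 50 ms
try:
    while True:                              # stands in for a slow DB call
        pass
except TimeoutException:
    print("interrupted mid-operation")
finally:
    signal.setitimer(signal.ITIMER_REAL, 0)  # disarm the timer
```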

Thanks,
Roman

On Wed, Sep 14, 2016 at 4:15 PM, Sean Dague  wrote:
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>
>
> It looks like some race has been introduced where the various db
> connections are not fully isolated from each other like they used to be.
> The testing magic for this is buried pretty deep in oslo.db.
>
> Olso.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post newton.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [nova] using the qemu memory overhead in memory resource tracking or scheduling

2016-09-06 Thread Roman Podoliaka
Looks like we already have something like that in the virt drivers interface:

https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L205-L216

which is used in the resource tracker.
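In the simplest form, a placement check of the kind asked about would reserve a fixed per-domain overhead from regular host memory on top of the huge-page accounting. A rough sketch with made-up numbers and names (not Nova's actual resource-tracker logic):

```python
# Rough sketch with hypothetical names/numbers: huge pages back the
# guest RAM, while the ~300MB qemu overhead reported above must come
# out of regular host OS memory.
QEMU_OVERHEAD_MB = 300

def can_place(host_free_mb, free_hugepages_mb, instance_mem_mb):
    return (free_hugepages_mb >= instance_mem_mb
            and host_free_mb >= QEMU_OVERHEAD_MB)

print(can_place(host_free_mb=250, free_hugepages_mb=4096,
                instance_mem_mb=2048))   # False: host OS memory exhausted
print(can_place(host_free_mb=1024, free_hugepages_mb=4096,
                instance_mem_mb=2048))   # True
```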

On Tue, Sep 6, 2016 at 10:40 AM, Balázs Gibizer
 wrote:
> Hi,
>
> In our cloud we use 1G huge pages for the instance memory.
> We started notice that qemu has a relevant memory overhead
> per domain (something like 300MB per domain). As we use
> huge pages for the instance memory nova scheduler makes
> placement decision based on the available huge pages.
> However in this situation the available host OS memory also
> needs to be considered. Reserving host OS memory deployment
> time is not practical as the needed reservation depends on the
> number of instances that will be run on that host.
>
> Is there any plan in the nova community to use qemu
> memory overhead in the placement decision in case of huge page
> backed instance?
>
> If there is no plan, do you support the idea adding such feature to nova?
>
> Cheers,
> gibi
>


Re: [openstack-dev] [nova] resignation from bug czar role

2016-09-05 Thread Roman Podoliaka
+1

I'll be happy to help with triaging new bugs and reviewing bug fixes.

On Mon, Sep 5, 2016 at 7:33 PM, Timofei Durakov  wrote:
> Hi, folks,
>
> Thanks, Markus, for doing this job! I'm interested in this activity.
>
> Timofey
>
> On Mon, Sep 5, 2016 at 7:20 PM, Sylvain Bauza  wrote:
>>
>>
>>
>> Le 05/09/2016 13:19, Markus Zoeller a écrit :
>>>
>>> TL;DR: bug czar role for Nova is vacant from now on
>>>
>>>
>>> After doing bug triage for ~1 year, which was quite interesting, it's
>>> time for me to move to different topics. My tasks within the company
>>> internal team are shifting too. Unfortunately less Nova for me in the
>>> next (hopefully short) time. That means I'm resigning from the bug czar
>>> role as of now.
>>>
>>>
>>> Observations in this timeframe
>>> --
>>>
>>> * The quality of most of the bug reports could be better. Very often
>>> they are not actionable. A bug report which isn't actionable burns
>>> resources without any benefit. The pattern I've seen is:
>>>  * 1/3 : invalid because they are support requests or a wrong
>>> understanding
>>>  * 1/3 : could be reasonable but essential information is missing
>>>  * 1/3 : sounds reasonable + has a little info, should be looked at
>>>Very few follow this template which is shown when you open a new
>>> report: https://wiki.openstack.org/wiki/Nova/BugsTeam/BugReportTemplate
>>>
>>> * We get ~40 new bug reports per week. With the current number of people
>>> who do bug triage, the number of overall bug reports doesn't decline. I
>>> started collecting data 6 months ago:
>>>
>>>
>>> http://45.55.105.55:3000/dashboard/db/openstack-bugs?from=now-6M=1
>>>
>>> * I wish the cores would engage more in bug triaging. If one core every
>>> other week would do the bug triage for 1 week, a core would have to do
>>> that only once per dev cycle. I'm aware of the review backlog though :/
>>>
>>> * I wish more non-cores would engage more in bug triaging.
>>>
>>> * We don't have contacts for a lot of areas in Nova:
>>>https://wiki.openstack.org/wiki/Nova/BugTriage#Tag_Owner_List
>>>
>>> * Keeping the bug reports in a consistent state is cumbersome:
>>>http://45.55.105.55:8082/bugs-dashboard.html#tabInProgressStale
>>>We could introduce more automation here.
>>>
>>>
>>> Things we should continue
>>> -
>>>
>>> * Bug reports older that the oldest supported stable release should be
>>>expired. Maybe best when the EOL tag gets applied.
>>>
>>>
>>> https://github.com/openstack-infra/release-tools/blob/master/expire_old_bug_reports.py
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095654.html
>>>
>>> * We never came to a real conclusion how the ops communicated the RFEs
>>> to us. The way of using "wishlist" bug reports wasn't successful IMO.
>>> The last proposal was to use the ops ML to bring an RFE into some
>>> actionable shape and then create a backlog spec out of it.
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/089365.html
>>>
>>>
>>>
>>> Things we should start
>>> --
>>>
>>> * A cross-project discussion of (easy) ways to collect and send debug
>>> data to upstream OpenStack. Almost no bug report in Nova had the result
>>> of "sosreport" attached although we ask for that in the report template.
>>>
>>>
>>>
>>> Some last words
>>> ---
>>>
>>> * Whoever wants to do the job next, I offer some kind of onboarding.
>>>
>>> * I'll push a change to remove the IRC meetings in the next few days:
>>>http://eavesdrop.openstack.org/#Nova_Bugs_Team_Meeting
>>>
>>> * The tooling I used will still be available at:
>>>
>>> https://github.com/markuszoeller/openstack/tree/master/scripts/launchpad
>>>
>>> * My server which hosts some dashboards will still be available at:
>>>http://45.55.105.55:3000/dashboard/db/openstack-bugs
>>>http://45.55.105.55:8082/bugs-dashboard.html
>>>http://45.55.105.55:8082/bugs-stats.html
>>>
>>> * I did an evaluation of Storyboard in July 2016 and it looks promising.
>>> Give it a shot at: https://storyboard-dev.openstack.org/#!/project/2 If
>>> you don't like something there, push a change, it's Python based.
>>>
>>> * I'll still hang out in the IRC channels, but don't expect much from me.
>>>
>>>
>>> Thanks a lot to the people who helped making Nova a better project by
>>> doing bug triage! Special thanks to auggy who put a lot(!) of effort
>>> into that.
>>>
>>> See you (hopefully) in Barcelona!
>>
>>
>> As said on IRC, hope we'll still see you around, and see you in Barcelona.
>> You made a great job !
>>
>> -Sylvain
>>
>>
>>> --
>>> Regards,
>>> Markus Zoeller (markus_z)
>>>
>>>
>>>

Re: [openstack-dev] [nova] [placement] unresolved topics in resource providers/placement api

2016-07-28 Thread Roman Podoliaka
Hi Chris,

A really good summary, thank you!

On Thu, Jul 28, 2016 at 4:57 PM, Chris Dent  wrote:
> It's pretty clear that we're going to need at least an interim and
> maybe permanent endpoint that returns a list of candidate target
> resource providers. This is because, at least initially, the
> placement engine will not be able to resolve all requirements down
> to the one single result and additional filtering may be required in
> the caller.
>
> The question is: Will that need for additional filtering always be
> present and if so do we:
>
> * consider that a bad thing that we should strive to fix by
>   expanding the powers and size of the placement engine
> * consider that a good thing that allows the placement engine to be
>   relatively simple and keeps edge-case behaviors being handled
>   elsewhere
>
> If the latter, then we'll have to consider how an allocation/claim
> in a list of potential allocations can be essentially reserved,
> verified, or rejected.

I'd personally prefer the latter. I don't think placement api will be
able to implement all the filters we currently have in
FilterScheduler.

How about we do a query in two steps:

1) take a list of compute nodes (== resource providers) and apply all
the filters which *cannot* be (or simply *are not* yet) implemented
in placement-api

2) POST a launch request passing the *pre-filtered* list of resource
providers.  placement api will pick one of those RPs, *claim* its
resources and return the claim info

A similar approach could probably be used for assigning weights to RPs
when we pass the list of RPs to placement api.
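The two-step flow above can be sketched as follows; the filters and the claim call are stand-ins, not real scheduler or placement APIs:

```python
# Stand-in sketch of the two-step flow: (1) pre-filter resource
# providers with scheduler-side filters, (2) ask placement to claim one
# of the survivors. claim() simulates the POST to the placement API.
def schedule(providers, filters, claim):
    candidates = [p for p in providers if all(f(p) for f in filters)]
    for rp in candidates:
        result = claim(rp)          # placement may still refuse (a race)
        if result is not None:
            return rp, result
    return None

providers = [{"name": "cn1", "ram_mb": 512},
             {"name": "cn2", "ram_mb": 8192}]
filters = [lambda rp: rp["ram_mb"] >= 2048]      # e.g. a RAM filter
claim = lambda rp: {"claim": "granted", "rp": rp["name"]}
print(schedule(providers, filters, claim))       # cn2 wins the claim
```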

Thanks,
Roman



Re: [openstack-dev] [oslo.db] [CC neutron] CIDR overlap functionality and constraints

2016-07-20 Thread Roman Podoliaka
Mike,

On Tue, Jul 19, 2016 at 7:40 AM, Mike Bayer  wrote:
> Note that I use the term "function" and not "procedure" to stress that this
> is not a "stored procedure" in the traditional sense of performing complex
> business logic and persistence operations - this CIDR function performs a
> calculation that is not at all specific to Openstack, and is provided
> already by other databases as a built-in, and nothing else.

My only concern is that, based on my previous experience, these
things easily grow into large, hard-to-maintain pieces of logic. We
need to be careful here and consider new additions on a case-by-case
basis.

> The general verbosity and unfamiliarity of these well known SQL features is
> understandably being met with trepidation.  I've identified that this
> trepidation is likely rooted in the fact that unlike the many other
> elaborate SQL features we use like ALTER TABLE, savepoints, subqueries,
> SELECT FOR UPDATE, isolation levels, etc. etc., there is no warm and fuzzy
> abstraction layer here that is both greatly reducing the amount of explicit
> code needed to produce and upgrade the feature, as well as indicating that
> "someone else" will fix this system when it has problems.
>
> Rather than hobbling the entire Openstack ecosystem to using a small subset
> of what our relational databases are capable of, I'd like to propose that
> preferably somewhere in oslo.db, or elsewhere, we begin providing the
> foundation for the use of SQL features that are rooted in mechanisms such as
> triggers and small use of stored functions, and more specifically begin to
> produce network-math SQL features as the public API, starting with this one.

If people are already using that, we might as well put this into
oslo.db and make their lives a bit easier. It would be really nice to
have a fallback to Python implementations of such functions whenever
possible, though.
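For the CIDR case specifically, a pure-Python fallback is straightforward with the stdlib. A sketch of what such a fallback might compute (the function name is made up, and a real DB-side implementation would of course run in SQL):

```python
import ipaddress

# Hypothetical pure-Python fallback for a DB-side CIDR-overlap
# function, using the stdlib ipaddress module.
def cidrs_overlap(a, b):
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

print(cidrs_overlap("10.0.0.0/24", "10.0.0.128/25"))  # True
print(cidrs_overlap("10.0.0.0/24", "10.0.1.0/24"))    # False
```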

This will likely make it harder to change a DB backend for a
particular project in the future, if it uses this advanced API, but
IMO it's really up to consuming projects to decide which DB backends
they support and test. They just should be aware of what they are
doing and weigh all the pros and cons first.

Thanks,
Roman



Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Roman Podoliaka
That's exactly what https://github.com/koder-ua/os_api is for: it
polls status changes in a separate thread and then updates the
futures, so that you can wait on multiple futures at once.

On Mon, Jul 4, 2016 at 2:19 PM, Denis Makogon <lildee1...@gmail.com> wrote:
>
>
> 2016-07-04 13:22 GMT+03:00 Roman Podoliaka <rpodoly...@mirantis.com>:
>>
>> Denis,
>>
>> >  A major problem
>> > appears when you try to provision a resource that needs some
>> > time to reach an ACTIVE/COMPLETED state (e.g. a nova instance, stack, trove
>> > database) and you have to poll for status changes; in
>> > general, polling requires sending HTTP requests within a specific
>> > time frame
>> > defined by the number of polling retries and the delays between them (almost all
>> > PaaS solutions in OpenStack do this, which may be fine for
>> > distributed backend services, but not for async frameworks).
>>
>> How would an asynchronous client help you avoid polling here? You'd
>> need some sort of a streaming API producing events on the server side.
>>
>
> No, it would not help me get rid of polling, but using async requests
> would allow proceeding with the next independent async tasks while
> awaiting the result of an async HTTP request.
>
>>
>> If you are simply looking for a better API around polling in OS
>> clients, take a look at https://github.com/koder-ua/os_api , which is
>> based on futures (be aware that HTTP requests are still *synchronous*
>> under the hood).
>>
>> Thanks,
>> Roman
>>


Re: [openstack-dev] [Openstack-operators] [nova] Fail build request if we can't inject files?

2016-07-04 Thread Roman Podoliaka
Hi all,

> Won't the user provided files also get made available by the config drive /
> metadata service ?

I believe they should.

Not sure it's the same problem, so just FYI: we recently encountered
an issue with VFAT formatted config drives when nova-compute is
deployed on CentOS or RHEL:

https://bugs.launchpad.net/cirros/+bug/1598783
https://bugs.launchpad.net/mos/+bug/1587960

Thanks,
Roman



Re: [openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-07-04 Thread Roman Podoliaka
Denis,

>  A major problem
> appears when you try to provision a resource that needs some
> time to reach an ACTIVE/COMPLETED state (e.g. a nova instance, stack, trove
> database) and you have to poll for status changes; in
> general, polling requires sending HTTP requests within a specific time frame
> defined by the number of polling retries and the delays between them (almost all
> PaaS solutions in OpenStack do this, which may be fine for
> distributed backend services, but not for async frameworks).

How would an asynchronous client help you avoid polling here? You'd
need some sort of a streaming API producing events on the server side.

If you are simply looking for a better API around polling in OS
clients, take a look at https://github.com/koder-ua/os_api , which is
based on futures (be aware that HTTP requests are still *synchronous*
under the hood).
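The futures-based shape such a wrapper offers can be approximated with the stdlib: each poller runs in a thread and resolves a future, so the caller waits on many resources at once while each one is still, underneath, polled synchronously. Here sleeps stand in for real status polling:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Stdlib approximation of the futures-based polling pattern: each
# worker synchronously "polls" (simulated by sleep) and the caller
# waits on all futures at once.
def wait_for_active(name, poll_delay):
    time.sleep(poll_delay)          # stands in for poll-until-ACTIVE
    return name, "ACTIVE"

resources = [("vm-1", 0.02), ("vm-2", 0.01), ("stack-1", 0.03)]
with ThreadPoolExecutor(max_workers=3) as ex:
    futures = [ex.submit(wait_for_active, n, d) for n, d in resources]
    results = dict(f.result() for f in as_completed(futures))
print(results)   # all three resources end up ACTIVE
```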

Thanks,
Roman



Re: [openstack-dev] [Neutron] Getting rid of lazy init for engine facade

2016-05-11 Thread Roman Podoliaka
Hi Anna,

Thank you for working on this in Neutron!

EngineFacade is initialized lazily internally - you don't have to do
anything for that in Neutron (you *had to* with the "old" EngineFacade -
this is the boilerplate your patch removes).
I believe you should be able to call configure(...) unconditionally
as soon as you have parsed the config files. Why do you want to
introduce a new conditional?

Moreover, if you only have connections to one database (unlike Nova,
which also has Cells databases), you don't need to call configure() at
all - EngineFacade will read the values of config options registered
by oslo.db on the first attempt to get a session / connection.
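The lazy behaviour described here can be illustrated with a toy facade (hypothetical, not oslo.db's actual classes): nothing is built until first use, and configure() merely overrides what that first use will consume:

```python
# Toy illustration (not oslo.db's real API) of lazy initialization:
# the engine is only built on first access; configure() just records
# overrides for that first build.
class LazyFacade:
    def __init__(self):
        self._engine = None
        self._conf = {"connection": "sqlite://"}   # config-file defaults

    def configure(self, **kw):
        if self._engine is not None:
            raise RuntimeError("facade already started")
        self._conf.update(kw)

    def get_engine(self):
        if self._engine is None:                   # first use triggers init
            self._engine = "engine(%(connection)s)" % self._conf
        return self._engine

facade = LazyFacade()
print(facade.get_engine())      # engine(sqlite://): defaults, no configure()

facade2 = LazyFacade()
facade2.configure(connection="mysql://db1")
print(facade2.get_engine())     # engine(mysql://db1)
```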

Thanks,
Roman

On Wed, May 11, 2016 at 4:41 PM, Anna Kamyshnikova
 wrote:
> Hi guys!
>
> I'm working on adoption of new engine facade from oslo.db for Neutron [1].
> This work requires us to get rid of lazy init for engine facade. [2] I
> propose change [3] that adds a configure_db parameter, which is False by
> default, so if DB access is required, configure_db=True should be
> passed manually.
>
> NOTE: this will affect all external repos depending on Neutron!
>
> I'm considering making this argument mandatory to force every project
> depending on this function explicitly make a decision there.
>
> I want to encourage reviewers to take a look at this change, and I'm
> looking forward to all suggestions.
>
> [1] - https://bugs.launchpad.net/neutron/+bug/1520719
> [2] -
> http://specs.openstack.org/openstack/oslo-specs/specs/kilo/make-enginefacade-a-facade.html
> [3] - https://review.openstack.org/#/c/312393/
>
> --
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>


Re: [openstack-dev] [Fuel][MySQL][DLM][Oslo][DB][Trove][Galera][operators] Multi-master writes look OK, OCF RA and more things

2016-04-29 Thread Roman Podoliaka
Hi Bogdan,

Thank you for sharing this! I'll need to familiarize myself with this
Jepsen thing, but overall it looks interesting.

As it turns out, we already run Galera in multi-writer mode in Fuel
unintentionally: when the active MySQL node goes down, HAProxy starts
opening connections to a backup; when the active node comes back up,
HAProxy starts opening connections to the original MySQL node again,
but OpenStack services may still have connections to the backup open
in their connection pools - so now you may have connections to
multiple MySQL nodes at the same time, which is exactly what you
wanted to avoid by using active/backup in the HAProxy configuration.

^ this actually leads to an interesting issue [1], when the DB state
committed on one node is not immediately available on another one.
Replication lag can be controlled via session variables [2], but that
does not always help: e.g. in [1] Nova first goes to Neutron to create
a new floating IP, gets 201 (and Neutron actually *commits* the DB
transaction) and then makes another REST API request to get a list of
floating IPs by address - the latter can be served by another
neutron-server, connected to another Galera node, which does not have
the latest state applied yet due to 'slave lag' - it can happen that
the list will be empty. Unfortunately, 'wsrep_sync_wait' can't help
here, as it's two different REST API requests, potentially served by
two different neutron-server instances.

Basically, you'd need to *always* wait for the latest state to be
applied before executing any queries, which Galera is trying to avoid
for performance reasons.

Thanks,
Roman

[1] https://bugs.launchpad.net/fuel/+bug/1529937
[2] 
http://galeracluster.com/2015/06/achieving-read-after-write-semantics-with-galera/

On Fri, Apr 22, 2016 at 10:42 AM, Bogdan Dobrelya
 wrote:
> [crossposting to openstack-operat...@lists.openstack.org]
>
> Hello.
> I wrote this paper [0] to demonstrate an approach how we can leverage a
> Jepsen framework for QA/CI/CD pipeline for OpenStack projects like Oslo
> (DB) or Trove, Tooz DLM and perhaps for any integration projects which
> rely on distributed systems. Although all tests are yet to be finished,
> results are quite visible, so I'd better share early for a review,
> discussion and comments.
>
> I have similar tests done for the RabbitMQ OCF RA clusterers as well,
> although I have yet to write a report.
>
> PS. I'm sorry for so many tags I placed in the topic header, should I've
> used just "all" :) ? Have a nice weekends and take care!
>
> [0] https://goo.gl/VHyIIE
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
>


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-02-23 Thread Roman Podoliaka
That's what I tried first :)

For some reason load distribution was still uneven. I'll check this
again, maybe I missed something.

On Tue, Feb 23, 2016 at 5:37 PM, Chris Friesen
<chris.frie...@windriver.com> wrote:
> On 02/23/2016 05:25 AM, Roman Podoliaka wrote:
>
>> So looks like it's two related problems here:
>>
>> 1) the distribution of load between workers is uneven. One way to fix
>> this is to decrease the default number of greenlets in pool [2], which
>> will effectively cause a particular worker to give up new connections
>> to other forks, as soon as there are no more greenlets available in
>> the pool to process incoming requests. But this alone will *only* be
>> effective when the concurrency level is greater than the number of
>> greenlets in pool. Another way would be to add a context switch to
>> eventlet accept() loop [8] right after spawn_n() - this is what I've
>> got with greenthread.sleep(0.05) [9][10] (the trade off is that we now
>> only can accept() 1/ 0.05 = 20 new connections per second per worker -
>> I'll try to experiment with numbers here).
>
>
> Would greenthread.sleep(0) be enough to trigger a context switch?
>
> Chris
>
>


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
On Tue, Feb 23, 2016 at 7:23 PM, Mike Bayer  wrote:
> Also I'm not
> sure how the enginefacade integration with nova didn't already cover this, I
> guess it doesn't yet impact all of those existing MySQLOpportunisticTest
> classes it has.

Yeah, I guess it's the first test case that actually tries to access
the DB via functions in nova/sqlalchemy/api.py; other test cases were
using the self.engine/self.sessionmaker attributes provided by
MySQLOpportunisticTestCase directly. Thus, when the integration with
enginefacade was done, we missed this.



Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
Ok, so I uploaded https://review.openstack.org/#/c/283728/ on the top
of Sean's patches.

We'll take a closer look tomorrow, if we can just put something like
this to oslo.db/sqlalchemy/test_base as a public test fixture.

On Tue, Feb 23, 2016 at 7:23 PM, Mike Bayer <mba...@redhat.com> wrote:
>
>
> On 02/23/2016 12:06 PM, Roman Podoliaka wrote:
>>
>> Mike,
>>
>> I think that won't work as Nova creates its own instance of
>> _TransactionContextManager:
>>
>>
>> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/db/sqlalchemy/api.py#L134-L135
>>
>> Maybe we could change _TestTransactionFactory a bit, so that it takes
>> a context manager instance as an argument?
>
>
> If they aren't using the enginefacade global context, then that's even
> easier.  They should be able to drop in _TestTransactionFactory or any other
> TransactionFactory into the _TransactionContextManager they have and then
> swap it back.   If there aren't API methods for this already, because
> everything in enginefacade is underscored, feel free to add. Also I'm not
> sure how the enginefacade integration with nova didn't already cover this, I
> guess it doesn't yet impact all of those existing MySQLOpportunisticTest
> classes it has.
>
>
>
>
>
>>
>> On Tue, Feb 23, 2016 at 6:09 PM, Mike Bayer <mba...@redhat.com> wrote:
>>>
>>>
>>>
>>> On 02/23/2016 09:22 AM, Sean Dague wrote:
>>>>
>>>>
>>>> With enginefacade work coming into projects, there seem to be some
>>>> new bits around oslo.db global sessions.
>>>>
>>>> The effect of this on tests is a little problematic, because it builds
>>>> global state that couples tests together. I've got a review to use a mysql
>>>> connection explicitly for some Nova functional tests, which correctly
>>>> fails and exposes a bug when run individually. However, when run in a
>>>> full test run, the global session means that it's not run against mysql,
>>>> it's run against sqlite, and passes.
>>>>
>>>> https://review.openstack.org/#/c/283364/
>>>>
>>>> We need something that's the inverse of session.configure() -
>>>>
>>>>
>>>> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
>>>> to reset the global session.
>>>>
>>>> Pointers would be welcomed.
>>>
>>>
>>>
>>> from the oslo.db side, we have frameworks for testing that handle all of
>>> these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and
>>> DbFixture).
>>> I don't believe Nova uses these frameworks (I think it should long term),
>>> but for now the techniques used by oslo.db's framework should likely be
>>> used:
>>>
>>> self.test.enginefacade = enginefacade._TestTransactionFactory(
>>>  self.test.engine, self.test.sessionmaker, apply_global=True,
>>>  synchronous_reader=True)
>>>
>>> self.addCleanup(self.test.enginefacade.dispose_global)
>>>
>>>
>>> The above apply_global flag indicates that the global enginefacade should
>>> use this TestTransactionFactory until disposed.
>>>
>>>
>>>
>>>
>>>
>>>>
>>>>  -Sean
>>>>
>>>
>>>


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
Mike,

I think that won't work as Nova creates its own instance of
_TransactionContextManager:

https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/db/sqlalchemy/api.py#L134-L135

Maybe we could change _TestTransactionFactory a bit, so that it takes
a context manager instance as an argument?

On Tue, Feb 23, 2016 at 6:09 PM, Mike Bayer  wrote:
>
>
> On 02/23/2016 09:22 AM, Sean Dague wrote:
>>
>> With enginefacade work coming into projects, there seem to be some
>> new bits around oslo.db global sessions.
>>
>> The effect of this on tests is a little problematic, because it builds
>> global state that couples tests together. I've got a review to use a mysql
>> connection explicitly for some Nova functional tests, which correctly
>> fails and exposes a bug when run individually. However, when run in a
>> full test run, the global session means that it's not run against mysql,
>> it's run against sqlite, and passes.
>>
>> https://review.openstack.org/#/c/283364/
>>
>> We need something that's the inverse of session.configure() -
>>
>> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
>> to reset the global session.
>>
>> Pointers would be welcomed.
>
>
> from the oslo.db side, we have frameworks for testing that handle all of
> these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and DbFixture).
> I don't believe Nova uses these frameworks (I think it should long term),
> but for now the techniques used by oslo.db's framework should likely be
> used:
>
> self.test.enginefacade = enginefacade._TestTransactionFactory(
> self.test.engine, self.test.sessionmaker, apply_global=True,
> synchronous_reader=True)
>
> self.addCleanup(self.test.enginefacade.dispose_global)
>
>
> The above apply_global flag indicates that the global enginefacade should
> use this TestTransactionFactory until disposed.
>
>
>
>
>
>>
>> -Sean
>>
>


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-02-23 Thread Roman Podoliaka
Hi all,

I've taken another look at this in order to propose patches to
oslo.service/oslo.db, so that we have better defaults for WSGI
greenlets number / max DB connections overflow [1] [2], which would be
more suitable for DB oriented services like our APIs are.

I used the Mike's snippet [3] for testing, 10 workers (i.e. forks)
served the WSGI app, ab concurrency level was set to 100, 3000
requests were sent.

With our default settings (1000 greenlets per worker, 5 connections in
the DB pool, 10 connections max overflow, 30 seconds timeout waiting
for a connection to become available), ~10-15 requests out of 3000
will fail with 500 due to pool timeout issue on every run [4].

As it was expected, load is distributed unevenly between workers: htop
shows that one worker is busy, while others are not [5]. Tracing
accept() calls with perf-events (sudo perf trace -e accept --pid=$PIDS
-S) allows to see the exact number of requests served by each worker
[6] - we can see that the "busy" worker served almost twice as many
WSGI requests as any other worker did. perf output [7] shows an
interesting pattern: each eventlet WSGI worker sleeps in accept()
waiting for new connections to become available in the queue handled
by the kernel; when there is a new connection available, a random
worker wakes up and tries to accept() as many connections as possible.

Reading the source code of the eventlet WSGI server [8] suggests that it
will accept() new connections as long as they are available (and as
long as there are more available greenthreads in the pool) before
starting to process already accept()'ed ones (spawn_n() only creates a
new greenthread and schedules it to be executed "later"). Given that
we have 1000 greenlets in the pool, there is a high probability we'll
end up with an overloaded worker. If handling these requests
involves doing DB queries, we have only 5 (pool) + 10 (max overflow)
DB connections available; the others will have to wait (and may eventually
time out after 30 seconds).

So looks like it's two related problems here:

1) the distribution of load between workers is uneven. One way to fix
this is to decrease the default number of greenlets in pool [2], which
will effectively cause a particular worker to give up new connections
to other forks, as soon as there are no more greenlets available in
the pool to process incoming requests. But this alone will *only* be
effective when the concurrency level is greater than the number of
greenlets in pool. Another way would be to add a context switch to the
eventlet accept() loop [8] right after spawn_n() - this is what I've
got with greenthread.sleep(0.05) [9][10] (the trade-off is that we can
now only accept() 1/0.05 = 20 new connections per second per worker -
I'll try to experiment with the numbers here).

2) even if the distribution of load is even, we still have to be able
to process requests according to the max level of concurrency,
effectively set by the number of greenlets in pool. For DB oriented
services that means we need to have DB connections available. [1]
increases the
default max_overflow value to allow SQLAlchemy to open additional
connections to a DB and handle spikes of concurrent requests.
Increasing the max_overflow value further will probably lead to
max-connections errors in the RDBMS server.
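The context-switch fix from (1) can be illustrated with a toy, stdlib-only model of cooperative scheduling (this is only an illustration, not eventlet itself): without a yield after each accept(), the worker that wakes up keeps accepting while connections are queued; with one (greenthread.sleep(0), or a tiny sleep), its siblings get a turn.

```python
import collections

def worker(name, log):
    # Stand-in for an eventlet WSGI accept() loop: each iteration "accepts"
    # one connection, then yields control, as greenthread.sleep(0) would.
    while True:
        log.append(name)
        yield

log = []
# Two "workers" scheduled round-robin, the way a cooperative scheduler can
# behave once every accept is followed by a context switch.
runnable = collections.deque([worker("A", log), worker("B", log)])
for _ in range(6):
    g = runnable.popleft()
    next(g)
    runnable.append(g)

print("".join(log))  # accepts alternate between workers: ABABAB
```

Without the yield, the model degenerates to a single worker accepting everything, which matches the uneven load observed in [5] and [6].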

As it was already mentioned in this thread, the rule of thumb is that
for DB oriented WSGI services the max_overflow value should be at
least close to the number of greenlets. Running tests on my machine
shows that having 100 greenlets in pool / 5 DB connections in pool /
50 max_overflow / 30 seconds pool timeout allows to handle up to 500
concurrent requests without seeing pool timeout errors.
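The pool arithmetic behind that rule of thumb can be sketched with nothing but the stdlib (the numbers are the defaults discussed in this thread; SQLAlchemy's real QueuePool enforces the same pool_size + max_overflow ceiling with a checkout timeout):

```python
import threading

# pool_size=5, max_overflow=10 -> at most 15 concurrent "connections";
# a short timeout stands in for the 30-second pool timeout.
POOL_SIZE, MAX_OVERFLOW, POOL_TIMEOUT = 5, 10, 0.1
slots = threading.BoundedSemaphore(POOL_SIZE + MAX_OVERFLOW)

def checkout():
    # True if a "connection" was acquired before the pool timeout expired.
    return slots.acquire(timeout=POOL_TIMEOUT)

# 16 concurrent checkouts: the first 15 fit, the 16th waits out the
# timeout and fails - the pool-timeout failure mode seen in [4].
results = [checkout() for _ in range(16)]
print(results.count(True), results.count(False))  # 15 1

for ok in results:
    if ok:
        slots.release()
```

With 1000 greenlets per worker all allowed to hit the DB concurrently, the 16th-and-beyond requests pile up against this ceiling, which is why max_overflow needs to be close to the greenlet count.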

Thanks,
Roman

[1] https://review.openstack.org/#/c/269186/
[2] https://review.openstack.org/#/c/269188/
[3] https://gist.github.com/zzzeek/c69138fd0d0b3e553a1f
[4] http://paste.openstack.org/show/487867/
[5] http://imgur.com/vEWJmrd
[6] http://imgur.com/FOZ2htf
[7] http://paste.openstack.org/show/487871/
[8] https://github.com/eventlet/eventlet/blob/master/eventlet/wsgi.py#L862-L869
[9] http://paste.openstack.org/show/487874/
[10] http://imgur.com/IuukDiD

On Mon, Jan 11, 2016 at 4:05 PM, Mike Bayer  wrote:
>
>
> On 01/11/2016 05:39 AM, Radomir Dopieralski wrote:
>> On 01/08/2016 09:51 PM, Mike Bayer wrote:
>>>
>>>
>>> On 01/08/2016 04:44 AM, Radomir Dopieralski wrote:
 On 01/07/2016 05:55 PM, Mike Bayer wrote:

> but also even if you're under something like
> mod_wsgi, you can spawn a child process or worker thread regardless.
> You always have a Python interpreter running and all the things it can
> do.

 Actually you can't, reliably. Or, more precisely, you really shouldn't.
 Most web servers out there expect to do their own process/thread
 management and get really embarrassed if you do something like this,
 resulting in weird stuff happening.
>>>
>>> I have to disagree with this as an 

Re: [openstack-dev] [ceilometer]ceilometer-collector high CPU usage

2016-02-17 Thread Roman Podoliaka
Hi all,

Based on my investigation [1], I believe this is a combined effect of
using eventlet and condition variables on Python 2.x. When heartbeats
are enabled in oslo.messaging, you'll see polling with very small
timeout values. This shouldn't waste much CPU time, but it is still
kind of annoying.

Thanks,
Roman

[1] https://bugs.launchpad.net/mos/+bug/1380220
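As a minimal illustration of the pattern behind that CPU usage (not oslo.messaging's actual code): polling with a zero timeout returns immediately whether or not anything is ready, so a loop built around it spins on syscalls instead of sleeping.

```python
import select
import socket

# A connected pair of sockets; nothing has been sent yet.
a, b = socket.socketpair()

readable, _, _ = select.select([a], [], [], 0)  # timeout=0: returns at once
print(readable == [])   # True: nothing to read, yet we burned a syscall

b.send(b"ping")
readable, _, _ = select.select([a], [], [], 0)
print(readable == [a])  # True: now there is data, same zero-cost timeout

a.close()
b.close()
```

A loop that keeps issuing the first call is exactly what strace/general logs show as constant epoll()/select() activity on an otherwise idle service.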

On Wed, Feb 17, 2016 at 3:06 PM, gordon chung  wrote:
> hi,
>
> this seems to be similar to a bug we were tracking in earlier[1].
> basically, any service with a listener never seemed to idle properly.
>
> based on earlier investigation, we found it relates to the heartbeat
> functionality in oslo.messaging. i'm not entirely sure if it's because
> of it or some combination of things including it. the short answer, is
> to disable heartbeat by setting heartbeat_timeout_threshold = 0 and see
> if it fixes your cpu usage. you can track the comments in bug.
>
> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1478135
>
> On 17/02/2016 4:14 AM, Gyorgy Szombathelyi wrote:
>> Hi!
>>
>> Excuse me, if the following question/problem is a basic one, already known 
>> problem,
>> or even a bad setup on my side.
>>
>> I just noticed that the most CPU consuming process in an idle
>> OpenStack cluster is ceilometer-collector. When there are only
>> 10-15 samples/minute, it just constantly eats about 15-20% CPU.
>>
>> I started to debug, and noticed that it epoll()s constantly with a zero
>> timeout, so it seems it just polls for events in a tight loop.
>> I found out that the _maybe_ the python side of the problem is
>> oslo_messaging.get_notification_listener() with the eventlet executor.
>> A quick search showed that this function is only used in aodh_listener and
>> ceilometer_collector, and both are using relatively high CPU even if they're
>> just 'listening'.
>>
>> My skills for further debugging is limited, but I'm just curious why this 
>> listener
>> uses so much CPU, while other executors, which are using eventlet, are not 
>> that
>> bad. Excuse me, if it was a basic question, already known problem, or even a 
>> bad
>> setup on my side.
>>
>> Br,
>> György
>>
>>
>
> --
> gord
>


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-01-06 Thread Roman Podoliaka
Hi Mike,

Thank you for this brilliant analysis! We've been seeing such timeout
errors downstream periodically, and this is the first time someone
has analysed the root cause thoroughly.

On Fri, Dec 18, 2015 at 10:33 PM, Mike Bayer  wrote:
> Hi all -
>
> Let me start out with the assumptions I'm going from for what I want to
> talk about.
>
> 1. I'm looking at Nova right now, but I think similar things are going
> on in other Openstack apps.
>
> 2. Settings that we see in nova.conf, including:
>
> #wsgi_default_pool_size = 1000
> #max_pool_size = 
> #max_overflow = 
> #osapi_compute_workers = 
> #metadata_workers = 
>
>
> are often not understood by deployers, and/or are left unchanged in a
> wide variety of scenarios.If you are in fact working for deployers
> that *do* change these values to something totally different, then you
> might not be impacted here, and if it turns out that everyone changes
> all these settings in real-world scenarios and zzzeek you are just being
> silly thinking nobody sets these appropriately, then fooey for me, I guess.

My understanding is that DB connection pool / workers number options
are usually changed, while the number of eventlet greenlets is not:

http://codesearch.openstack.org/?q=wsgi_default_pool_size=nope==
http://codesearch.openstack.org/?q=max_pool_size=nope==

I think it's for "historical" reasons when MySQL-Python was considered
to be the default DB API driver and we had to work around its
concurrency issues with eventlet by using multiple forks of services.

But as you point out even with a non-blocking DB API driver like
pymysql we are still having problems with timeouts due to pool vs
greenlets number settings.

> 3. There's talk about more Openstack services, at least Nova from what I
> heard the other day, moving to be based on a real webserver deployment
> in any case, the same way Keystone is.   To the degree this is true
> would also mitigate what I'm seeing but still, there's good changes that
> can be made here.

I think, ideally we'd like to have "wsgi container agnostic" apps not
coupled to eventlet or anything else - so that it will be up to a
deployer to choose the application server.

> But if we only have a super low number of greenlets and only a few dozen
> workers, what happens if we have more than 240 requests come in at once,
> aren't those connections going to get rejected?  No way!  eventlet's
> networking system is better than that, those connection requests just
> get queued up in any case, waiting for a greenlet to be available.  Play
> with the script and its settings to see.

Right, it must be controlled by the backlog argument value here:

https://github.com/openstack/oslo.service/blob/master/oslo_service/wsgi.py#L80
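That backlog value is ultimately just the second argument to listen(): how many fully established but not-yet-accept()ed connections the kernel will queue for the listening socket. A minimal sketch (128 is an arbitrary example value here, not oslo.service's default):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
srv.listen(128)              # <- the 'backlog' knob oslo.service exposes
port = srv.getsockname()[1]
print(port > 0)              # True: the listening socket is up
srv.close()
```

Connections beyond the backlog are not rejected by the application; they simply wait in (or overflow) the kernel queue before a greenlet ever sees them.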

> But if we're blocking any connection attempts based on what's available
> at the database level, aren't we under-utilizing for API calls that need
> to do a lot of other things besides DB access?  The answer is that may
> very well be true!   Which makes the guidance more complicated based on
> what service we are talking about.   So here, my guidance is oriented
> towards those Openstack services that are primarily doing database
> access as their primary work.

I believe, all our APIs are pretty much DB oriented.

> Given the above caveat, I'm hoping people can look at this and verify my
> assumptions and the results.Assuming I am not just drunk on eggnog,
> what would my recommendations be?  Basically:
>
> 1. at least for DB-oriented services, the number of 1000 greenlets
> should be *way* *way* lower, and we most likely should allow for a lot
> more connections to be used temporarily within a particular worker,
> which means I'd take the max_overflow setting and default it to like 50,
> or 100.   The Greenlet number should then be very similar to the
> max_overflow number, and maybe even a little less, as Nova API calls
> right now often will use more than one connection concurrently.

I suggest we tweak the config options values in both oslo.service and
oslo.db to provide reasonable production defaults and document the
"correlation" between DB connection pool / greenlet workers number
settings.

> 2. longer term, let's please drop the eventlet pool thing and just use a
> real web server!  (but still tune the connection pool appropriately).  A
> real web server will at least know how to efficiently direct requests to
> worker processes.   If all Openstack workers were configurable under a
> single web server config, that would also be a nice way to centralize
> tuning and profiling overall.

I'd rather we simply not couple to eventlet unconditionally and allow
deployers to choose the WSGI container they want to use.

Thanks,
Roman


Re: [openstack-dev] [oslo.db][sqlalchemy][mistral] Configuring default transaction isolation level

2015-12-08 Thread Roman Podoliaka
Hi Moshe,

Feel free to submit a patch! This seems to be something we want to be
able to configure.

Thanks,
Roman
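For illustration, the same knob exists at connection-creation time in the stdlib driver (sqlite3 also calls it isolation_level); the SQLAlchemy spelling for the Mistral case would be along the lines of create_engine(url, isolation_level="READ COMMITTED"), which is what an oslo.db option would presumably pass through - treat that exact spelling as an assumption, not a confirmed API of the eventual fix.

```python
import sqlite3

# Configure the isolation behaviour once, when the connection is created,
# instead of monkey-patching it in afterwards: isolation_level=None puts
# the stdlib sqlite3 connection into autocommit mode.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t (x) VALUES (1)")   # committed immediately
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1
conn.close()
```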

On Tue, Dec 8, 2015 at 9:41 AM, ELISHA, Moshe (Moshe)
 wrote:
> Hi,
>
>
>
> We at Mistral want to move from the default transaction isolation level of
> REPEATABLE READ to READ COMMITTED as part of a bugfix[1].
>
>
>
> I did not find a way to pass the isolation level to sqlachemy using oslo.db
> and the current solution is to use monkey-patching[2] that adds the
> “isolation_level” property.
>
>
>
> Is there currently a better way to set the default isolation level?
>
> If not – I will create a BP for it.
>
>
>
> Thanks.
>
>
>
> [1] https://review.openstack.org/#/c/253819
>
> [2] https://review.openstack.org/#/c/253819/11/mistral/db/sqlalchemy/base.py
>
>
>
>


Re: [openstack-dev] [nova][ironic] do we really need websockify with numpy speedups?

2015-11-26 Thread Roman Podoliaka
Hi Pavlo,

Can we just use a wheel package for numpy instead?

Thanks,
Roman
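The extras_require move Pavlo describes below would look roughly like this in websockify's setup.py (a hypothetical sketch using the names from this thread, written as plain data so it can be inspected without running setuptools):

```python
# Hypothetical sketch of the proposed packaging change, not websockify's
# actual setup.py: numpy moves from a hard dependency to an opt-in extra.
setup_kwargs = {
    "name": "websockify",
    "install_requires": [],                      # numpy no longer mandatory
    "extras_require": {"fastHyBi": ["numpy"]},   # pip install websockify[fastHyBi]
}

print("numpy" in setup_kwargs["extras_require"]["fastHyBi"])  # True
```

A packager who wants the fast HyBi path depends on websockify[fastHyBi]; everyone else skips the multi-minute numpy build.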

On Thu, Nov 26, 2015 at 3:00 PM, Pavlo Shchelokovskyy
 wrote:
> Hi again,
>
> I've gone ahead and created a proper pull request to websockify [0]; comment
> there if you think we need it :)
>
> I also realized that there is another option, which is to include
> python-numpy to files/debs/ironic and files/debs/nova (strangely it is
> already present in rpms/ for nova, noVNC and spice services).
> This should install a pre-compiled version from distro repos, and should
> also speed things up.
>
> Any comments welcome.
>
> [0] https://github.com/kanaka/websockify/pull/212
>
> Best regards,
>
> On Thu, Nov 26, 2015 at 1:44 PM Pavlo Shchelokovskyy
>  wrote:
>>
>> Hi all,
>>
>> I was long puzzled why devstack is installing numpy. Being a fantastic
>> package itself, it has the drawback of taking about 4 minutes to compile its
>> C extensions when installing on our gates (e.g. [0]). I finally took time to
>> research and here is what I've found:
>>
>> it is used only by websockify package (installed by AFAIK ironic and nova
>> only), and there it is used to speed up the HyBi protocol. Although the code
>> itself has a path to work without numpy installed [1], the setup.py of
>> websockify declares numpy as a hard dependency [2].
>>
>> My question is do we really need those speedups? Do we test any feature
>> requiring fast HyBi support on gates? Not installing numpy would shave 4
>> minutes off any gate job that is installing Nova or Ironic, which seems like
>> a good deal to me.
>>
>> If we decide to save this time, I have prepared a pull request for
>> websockify that moves numpy requirement to "extras" [3]. As a consequence
>> numpy will not be installed by default as dependency, but still possible to
>> install with e.g. "pip install websockify[fastHyBi]", and package builders
>> can also specify numpy as hard dependency for websockify package in package
>> specs.
>>
>> What do you think?
>>
>> [0]
>> http://logs.openstack.org/82/236982/6/check/gate-tempest-dsvm-ironic-agent_ssh/1141960/logs/devstacklog.txt.gz#_2015-11-11_19_51_40_784
>> [1]
>> https://github.com/kanaka/websockify/blob/master/websockify/websocket.py#L143
>> [2] https://github.com/kanaka/websockify/blob/master/setup.py#L37
>> [3]
>> https://github.com/pshchelo/websockify/commit/0b1655e73ea13b4fba9c6fb4122adb1435d5ce1a
>>
>> Best regards,
>> --
>> Dr. Pavlo Shchelokovskyy
>> Senior Software Engineer
>> Mirantis Inc
>> www.mirantis.com
>
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>


Re: [openstack-dev] [oslo.db][sqlalchemy] rollback after commit

2015-09-16 Thread Roman Podoliaka
Hi Gareth,

Right, 'SELECT 1' issued at the beginning of every transaction is a
pessimistic check to detect disconnects early. oslo.db will create a
new DB connection (as well as invalidate all the existing connections
to the same DB in the pool) and retry the transaction once [1]

The ROLLBACK you are referring to is issued when a connection is
returned to the pool. This is a configurable SQLAlchemy feature [2]. The
reasoning behind this is that all connections are in transactional
mode by default (there is always an ongoing transaction, you just need
to do COMMITs) and they are pooled: if we don't issue a ROLLBACK here,
it's possible that someone will return a connection to the pool not
ending the transaction properly, which can possibly lead to deadlocks
(DB rows remain locked) and stale data reads, when the very same DB
connection is checked out from the pool again and used by someone
else.

As long as you finish all your transactions with either COMMIT or
ROLLBACK before returning a connection to the pool, these forced
ROLLBACKs must be cheap, as the RDBMS doesn't have to maintain some
state bound to this transaction (as it's just begun and you ended the
previous transaction on this connection). Still, it protects you from
the cases, when something went wrong and you forgot to end the
transaction.

Thanks,
Roman

[1] 
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L53-L82
[2] 
http://docs.sqlalchemy.org/en/latest/core/pooling.html#sqlalchemy.pool.Pool.params.reset_on_return
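A minimal stdlib demonstration of why that rollback-on-return is cheap insurance, using a single sqlite3 connection as the "pooled" resource (SQLAlchemy's reset_on_return='rollback' does the equivalent for every pooled connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

def return_to_pool(c):
    # What the pool's ROLLBACK-on-return does: end any transaction the
    # caller forgot to finish before the connection is handed out again.
    c.rollback()

conn.execute("INSERT INTO t (x) VALUES (1)")  # oops: no COMMIT issued
return_to_pool(conn)

leftover = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(leftover)  # 0: the unfinished transaction was discarded, no stale row
conn.close()
```

If the caller did end the transaction properly, the extra ROLLBACK is a no-op on an empty transaction, which is why it costs almost nothing in the well-behaved case.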

On Wed, Sep 16, 2015 at 12:13 PM, Gareth  wrote:
> Hi DB experts,
>
> I'm using mysql now and have general log like:
>
> 1397 Query SELECT 1
>
> 1397 Query SELECT 
>
> 1397 Query UPDATE 
>
> 1397 Query COMMIT
>
> 1397 Query ROLLBACK
>
> I found there always is 'SELECT 1' before real queries and 'COMMIT'
> and 'ROLLBACK' after. I know 'SELECT 1' is the lowest cost for check
> db's availability and 'COMMIT' is for persistence. But why is a
> 'ROLLBACK' here? Is this 'ROLLBACK' the behaviour of oslo.db or
> sqlchemy?
>
>
>
> --
> Gareth
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ¥1 to an open organization you specify.
>


[openstack-dev] [Infra][CI] gate-{name}-requirements job fails on stable/juno

2015-07-24 Thread Roman Podoliaka
Hi all,

In oslo.db we recently hit a requirements checking job failure [0].
It's caused by the fact the job tries to import a module from
openstack_requirements package, which is missing in stable/juno branch
of requirements project [1].

The job must have been broken for stable/juno since [2] was merged.
stable/kilo and master are ok, as corresponding branches of
requirements project already have openstack_requirements package.

There are more failures in other projects [3].

Please advise on how we are going to fix this to unblock changes to
requirements for stable/juno.

Thanks,
Roman

[0] 
http://logs.openstack.org/75/186175/1/check/gate-oslo.db-requirements/b3a973d/console.html
[1] https://github.com/openstack/requirements/tree/stable/juno
[2] https://review.openstack.org/#/c/195857/2
[3] 
https://review.openstack.org/#/q/file:requirements.txt+branch:stable/juno+status:open,n,z



Re: [openstack-dev] [Infra][CI] gate-{name}-requirements job fails on stable/juno

2015-07-24 Thread Roman Podoliaka
Oh, I missed that one!

Thank you, Ihar!

On Fri, Jul 24, 2015 at 2:45 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 07/24/2015 12:37 PM, Roman Podoliaka wrote:
 Hi all,

 In oslo.db we recently hit a requirements checking job failure
 [0]. It's caused by the fact the job tries to import a module from
 openstack_requirements package, which is missing in stable/juno
 branch of requirements project [1].

 The job must have been broken for stable/juno since [2] was
 merged. stable/kilo and master are ok, as corresponding branches
 of requirements project already have openstack_requirements
 package.

 There are more failures in other projects [3].

 Please advice on how we are going to fix this to unblock changes
 to requirements for stable/juno.


 https://review.openstack.org/#/c/198146/

 Ihar



Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-07-23 Thread Roman Podoliaka
Hi all,

FWIW, this is exactly what we have in oslo libs, e.g. in oslo.db [0]

Putting all Nova options into one big file is probably not a good
idea; still, we could consider storing those per package (per backend,
per driver, etc.), rather than per Python module, to reduce the number
of possible circular imports when using the import_opt() helper.

Thanks,
Roman

[0] https://github.com/openstack/oslo.db/blob/master/oslo_db/options.py
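A stdlib-only sketch of the per-package registry idea (hypothetical names; the real mechanism is oslo.config's cfg.CONF plus CONF.import_opt()): every option a package owns is registered in one options.py-style module, so consumer modules look options up in the registry instead of importing each other, and no import cycle can form around option lookups.

```python
# options.py for a hypothetical package: the single place its options live.
_OPTS = {}

def register_opt(name, default, help_text=""):
    # Register an option once, at package import time.
    _OPTS.setdefault(name, {"default": default, "help": help_text})

def get_default(name):
    # Any module in the package reads options via the registry, never via
    # a sibling module that happens to have registered them.
    return _OPTS[name]["default"]

register_opt("max_pool_size", 5, "Connections kept open in the DB pool")
register_opt("max_overflow", 50, "Extra connections allowed under load")

print(get_default("max_overflow"))  # 50
```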

On Thu, Jul 23, 2015 at 6:39 PM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
 On Thu, 2015-07-23 at 17:55 +0300, mhorban wrote:
 During development in nova I faced an issue related to config
 options. Now we have lists of config options and registering options mixed
 with source code in regular files.
  From one side it can be convenient: to have module-encapsulated config
 options. But problems appear when we need to use some config option in
 different modules/packages.

 If some option is registered in module X and module X imports module Y for
 some reason...
 and in one day we need to import this option in module Y we will get
 exception
 NoSuchOptError on import_opt in module Y.
 Because of circular dependency.
 To resolve it we can move registering of this option in Y module(in the
 inappropriate place) or use other tricks.

 Isn't this use case what the import_opt() method of CONF is for?  The
 description given in the docstring is:

 Import a module and check that a given option is registered.

 This is intended for use with global configuration objects
 like cfg.CONF where modules commonly register options with
 CONF at module load time. If one module requires an option
 defined by another module it can use this method to explicitly
 declare the dependency.

 It's used all over the place in nova for this purpose, as far as I can
 see.
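The registration-then-check mechanism described in the docstring above can be sketched with a stdlib-only toy model. This is not oslo.config's actual implementation; MiniConf, the module name mod_x and the option db_retries are all made up for illustration.

```python
import importlib
import sys
import types

class NoSuchOptError(KeyError):
    """Raised when an option is looked up before any module registered it."""

class MiniConf:
    """A tiny stand-in for a global cfg.CONF-style registry (not oslo.config)."""
    def __init__(self):
        self._opts = {}

    def register_opt(self, name, default=None):
        self._opts.setdefault(name, default)

    def import_opt(self, name, module_name):
        # Import the module so its module-level register_opt() calls run,
        # then verify that the option really was registered.
        importlib.import_module(module_name)
        if name not in self._opts:
            raise NoSuchOptError(name)

CONF = MiniConf()

# Simulate "module X", which registers an option at import time.
mod_x = types.ModuleType("mod_x")
sys.modules["mod_x"] = mod_x
CONF.register_opt("db_retries", default=3)  # what mod_x does when imported

# "Module Y" declares the dependency explicitly via import_opt() instead of
# importing mod_x for unrelated reasons (the path that creates import cycles).
CONF.import_opt("db_retries", "mod_x")
print(CONF._opts["db_retries"])
```

The point is that module Y never imports module X for unrelated reasons; it declares only the single option it depends on.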

 I propose to create a file options.py in each package and move all of the
 package's config options and registration code there.
 Such an approach allows us to import any option in any place of nova without
 problems.

 The problem with this reorganization is that it moves the options from
 the place where they're primarily intended to be used.  This could make
 it harder to maintain, such as ensuring the help text is updated when
 the code is.  If nova were a smaller code base, I think it would make
 sense to reorganize in this fashion, but given how large nova actually
 is…
 --
 Kevin L. Mitchell kevin.mitch...@rackspace.com
 Rackspace




Re: [openstack-dev] [nova] Rebuilding instances booted from volume

2015-07-02 Thread Roman Podoliaka
Hi ZhengZhenyu,

I'd say, it's more like a new feature and rebuild of volume-backed
instances is simply not implemented yet.

Though I agree that the existing behaviour is rather confusing, as such
a rebuild will be a no-op and Nova won't report any errors either.

AFAIK, Eugeniya K. (CC'ed) started to work on this, maybe she will be
able to upload any WIP patches soon.

Thanks,
Roman

On Thu, Jul 2, 2015 at 7:11 AM, ZhengZhenyu zheng.zhe...@outlook.com wrote:
 Hi, All

 According to my test, Nova cannot rebuild Volume-Booted instances, patch:
 https://review.openstack.org/#/c/176891/
 fixes rebuild for instances launched from an image with volumes attached,
 but rebuilding an instance booted from volume is still not working.

 The rebuild action for volume-booted instances after implementing the above
 patch performs like this:

 The volumes are detached and attached again, the selected image/snapshot for
 rebuilding is actually useless.
 This means that if the /dev/vda for an instance booted from volume is broken
 for some reason, we cannot rebuild it from a new
 image or a snapshot of this instance (nova just detaches and attaches
 the same volume again).

 I don't know whether this is a bug or it is designed on purpose.

 Thanks,

 BR,
 Zheng



Re: [openstack-dev] [oslo] needed, driver for oslo.db thursday session

2015-05-21 Thread Roman Podoliaka
Will try to make it, but will probably be in 'read-only' mode, sorry :(

On Wed, May 20, 2015 at 5:59 PM, Mike Bayer mba...@redhat.com wrote:


 On 5/20/15 9:31 AM, Davanum Srinivas wrote:

 Thanks Jeremy,

 Mike, Roman, Victor, Please see remote connection details in:
 https://etherpad.openstack.org/p/YVR-oslo-db-plans

 The schedule time for the session is in:

 https://libertydesignsummit.sched.org/event/3571aa54b364c62e097da8cd32d97258

 Hope you can make it :) yes, please pick one of the 2 choices there
 (either sip or google hangout) and drop a note in the etherpad which
 one you want me to connect to


 probably google hangout, maybe my cats can join in that way also.

 confirming this is 2:20 PM PDT and 5:20 PM EDT for me.

 I've updated my calendar, talk to you tomorrow.






 thanks,
 dims


 On Tue, May 19, 2015 at 10:17 AM, Jeremy Stanley fu...@yuggoth.org
 wrote:

 On 2015-05-19 09:06:56 -0700 (-0700), Davanum Srinivas wrote:

 Ouch. Thanks for the heads up Roman

 We have https://wiki.openstack.org/wiki/Infrastructure/Conferencing
 which we used yesterday to successfully bridge Clark B. into an I18n
 tooling session via Jitsi over the normal conference wireless
 network with the built-in mic/speaker in Jim's laptop. Feel free to
 use it in your sessions, just try to pick a random conference number
 between 6000 and 7999 so nobody steps on the toes of other sessions
 which might be using it (maybe add your conference room number to
 6000 or something?). Let me or other Infra people know if you have
 any questions about or trouble using it!
 --
 Jeremy Stanley




Re: [openstack-dev] [oslo] needed, driver for oslo.db thursday session

2015-05-19 Thread Roman Podoliaka
Hi all,

Just FYI, due to problems with obtaining a Canadian visa, neither
Victor Sergeyev, nor I made it to Vancouver.

I hope someone else from Oslo team can replace Mike as a session driver.

Thanks,
Roman

On Tue, May 19, 2015 at 3:53 AM, Mike Bayer mba...@redhat.com wrote:
 Hello -

 It is my extreme displeasure and frustration to announce that due to an
 incredibly unfortunate choice of airline, I had to cancel my entire trip to
 the Openstack summit after spending 26 hours in my home airport waiting for
 my airline to produce a working airplane (which they did not).

 I will not be able to attend in person the Thursday oslo.db session I was to
 drive, so a replacement will be needed.  I am also lamenting not being able
 to meet so many of you who I hoped very much to meet.

 Hope you all enjoy the summit and I hope our paths can cross at future
 events.

 - mike





Re: [openstack-dev] [Fuel] Testing DB migrations

2015-03-06 Thread Roman Podoliaka
Hi all,

You could take a look at how this is done in OpenStack projects [1][2]

Most important parts:
1) use the same RDBMS you use in production
2) test migration scripts on data, not on empty schema
3) test corner cases (adding a NOT NULL column without a server side
default value, etc)
4) do a separate migration scripts run with large data sets to make
sure you don't introduce slow migrations [3]

Thanks,
Roman

[1] 
https://github.com/openstack/nova/blob/fb642be12ef4cd5ff9029d4dc71c7f5d5e50ce29/nova/tests/unit/db/test_migrations.py#L66-L833
[2] 
https://github.com/openstack/oslo.db/blob/0058c6510bfc6c41c830c38f3a30b5347a703478/oslo_db/sqlalchemy/test_migrations.py#L40-L273
[3] 
http://josh.people.rcbops.com/2013/12/third-party-testing-with-turbo-hipster/
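A minimal sketch of points 2 and 3 above, using the stdlib sqlite3 module purely for illustration (point 1 means a real test suite would run the same script against the production RDBMS instead); the users table and the migration are hypothetical:

```python
import sqlite3

def migrate(conn):
    # The corner case from point 3: adding a NOT NULL column to a table
    # that already contains rows fails without a server-side default.
    conn.execute(
        "ALTER TABLE users ADD COLUMN status TEXT NOT NULL DEFAULT 'active'"
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
# Point 2: run the migration against data, not an empty schema.
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
migrate(conn)
rows = conn.execute("SELECT name, status FROM users ORDER BY id").fetchall()
print(rows)  # [('alice', 'active'), ('bob', 'active')]
```

Running the same migrate() against an empty table would hide the NOT NULL corner case entirely, which is why seeding data first matters.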

On Fri, Mar 6, 2015 at 4:50 PM, Nikolay Markov nmar...@mirantis.com wrote:
 We already run unit tests only against a real PostgreSQL. But this still doesn't
 answer the question of how we should test migrations.

 On Fri, Mar 6, 2015 at 5:24 PM, Boris Bobrov bbob...@mirantis.com wrote:

 On Friday 06 March 2015 16:57:19 Nikolay Markov wrote:
  Hi everybody,
 
  From time to time some bugs appear regarding failed database migrations
  during upgrade and we have High-priority bug for 6.1 (
  https://bugs.launchpad.net/fuel/+bug/1391553) on testing this migration
  process. I want to start a thread for discussing how we're going to do
  it.
 
  I don't see any obvious solution, but we can at least start adding tests
  together with any changes in migrations, which will use a number of
  various
  fake environments upgrading and downgrading DB.
 
  Any thoughts?

 In Keystone, adding unit tests and running them against in-memory SQLite has
 proven ineffective. The only solution we've come to is to run all db-related
 tests against real RDBMSes.

 --
 Best regards,
 Boris Bobrov





 --
 Best regards,
 Nick Markov



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Hi all,

Mike, thanks for summarizing this in
https://wiki.openstack.org/wiki/PyMySQL_evaluation !

On PyMySQL: this is something we need to enable testing of oslo.db on
Python 3.x and PyPy. Though, I doubt we want to make PyMySQL the
default DB API driver for OpenStack services for Python 2.x. At least,
not until PyMySQL provides C-speedups for hot spots in the driver code
(I assume this can be done in eventlet/PyPy friendly way using cffi).
Otherwise, PyMySQL would be much slower than MySQL-Python for the
typical SQL queries we do (e.g. ask for *a lot* of data from the DB).

On native threads vs green threads: I very much like the Keystone
approach, which allows to run the service using either eventlet or
Apache. It would be great, if we could do that for other services as
well.

Thanks,
Roman

On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:

 On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:

 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:

 Yeah, it was never threads vs. greenthreads. There was a lot of pushback
 to relying on Twisted, which many people found confusing to use, and
 more importantly, to follow when reading code. Whatever the performance
 difference may be, eventlet code is a lot easier to follow, as it more
 closely resembles single-threaded linear execution.

 -- Ed Leafe



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Jeremy,

I don't have exact numbers, so yeah, it's just an assumption based on
looking at the nova-api/scheduler logs with connection_debug set to
100.

But that's a good point you are making here: it will be interesting to
see what difference enabling of PyMySQL will make for tempest/rally
workloads, rather than just running synthetic tests. I'm going to give
it a try on my devstack installation.

Thanks,
Roman

On Thu, Jan 29, 2015 at 6:42 PM, Jeremy Stanley fu...@yuggoth.org wrote:
 On 2015-01-29 18:35:20 +0200 (+0200), Roman Podoliaka wrote:
 [...]
 Otherwise, PyMySQL would be much slower than MySQL-Python for the
 typical SQL queries we do (e.g. ask for *a lot* of data from the DB).
 [...]

 Is this assertion based on representative empirical testing (for
 example profiling devstack+tempest, or perhaps comparing rally
 benchmarks), or merely an assumption which still needs validating?
 --
 Jeremy Stanley



Re: [openstack-dev] [oslo.db] PyMySQL review

2015-01-29 Thread Roman Podoliaka
Mike,

I can't agree more: as far as we are concerned, every service is just
another WSGI app, and it should be left up to the operator how to deploy
it.

So 'green thread awareness' (i.e. patching of the world) should go into a
separate keystone|*-eventlet binary, while everyone else will still be
able to use it as a general WSGI app.
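A minimal illustration of that separation, using only the stdlib: the application is a plain WSGI callable, and the server that hosts it (here wsgiref, but equally mod_wsgi or an eventlet server) is chosen entirely outside the application code.

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Plain, shared-nothing WSGI: no eventlet imports, no monkey-patching.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]

# Deployment choice lives outside the app: here the stdlib reference server
# on a free port; the same `app` callable could be mounted under Apache or
# an eventlet.wsgi server without modification.
server = make_server("127.0.0.1", 0, app)
port = server.server_address[1]
t = threading.Thread(target=server.handle_request)  # serve one request
t.start()
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
t.join()
server.server_close()
print(body)
```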

Thanks,
Roman

On Thu, Jan 29, 2015 at 6:55 PM, Mike Bayer mba...@redhat.com wrote:


 Roman Podoliaka rpodoly...@mirantis.com wrote:


 On native threads vs green threads: I very much like the Keystone
 approach, which allows to run the service using either eventlet or
 Apache. It would be great, if we could do that for other services as
 well.

 but why do we need two approaches to be at all explicit?   Basically, if you 
 write a WSGI application, you normally are writing non-threaded code with a 
 shared nothing approach.  Whether the WSGI app is used in a threaded apache 
 container or a gevent style uWSGI container is a deployment option.  This 
 shouldn’t be exposed in the code.





 Thanks,
 Roman

 On Thu, Jan 29, 2015 at 5:54 PM, Ed Leafe e...@leafe.com wrote:

 On 01/28/2015 06:57 PM, Johannes Erdfelt wrote:

 Not sure if it helps more than this explanation, but there was a
 blueprint and accompanying wiki page that explains the move from twisted
 to eventlet:

 Yeah, it was never threads vs. greenthreads. There was a lot of pushback
 to relying on Twisted, which many people found confusing to use, and
 more importantly, to follow when reading code. Whatever the performance
 difference may be, eventlet code is a lot easier to follow, as it more
 closely resembles single-threaded linear execution.

  -- Ed Leafe



Re: [openstack-dev] [api][nova] Openstack HTTP error codes

2015-01-29 Thread Roman Podoliaka
Hi Anne,

I think Eugeniya refers to the problem that we can't really distinguish
between two different badRequest (400) errors (e.g. a wrong security
group name vs a wrong key pair name when starting an instance), unless
we parse the error description, which is error-prone.
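A small sketch of the difference: classifying by message text versus by a structured code. The errorCode field here is hypothetical (AWS EC2-style), not part of the real Compute API.

```python
import json

# Today: two distinct failures share HTTP 400 and differ only in prose.
resp_a = ('{"badRequest": {"message": '
          '"Security group bar not found for project demo.", "code": 400}}')
resp_b = '{"badRequest": {"message": "Invalid key_name provided.", "code": 400}}'

def classify_by_message(body):
    # Fragile: breaks as soon as the wording changes or gets translated.
    msg = json.loads(body)["badRequest"]["message"]
    if "Security group" in msg:
        return "bad_security_group"
    if "key_name" in msg:
        return "bad_keypair"
    return "unknown"

# A structured variant in the spirit of the AWS EC2 codes; "errorCode"
# is an invented field used only for this illustration.
resp_c = ('{"badRequest": {"message": "Invalid key_name provided.", '
          '"code": 400, "errorCode": "InvalidKeyPair.NotFound"}}')

def classify_by_code(body):
    return json.loads(body)["badRequest"].get("errorCode", "unknown")

print(classify_by_message(resp_a))  # bad_security_group
print(classify_by_message(resp_b))  # bad_keypair
print(classify_by_code(resp_c))     # InvalidKeyPair.NotFound
```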

Thanks,
Roman

On Thu, Jan 29, 2015 at 6:46 PM, Anne Gentle
annegen...@justwriteclick.com wrote:


 On Thu, Jan 29, 2015 at 10:33 AM, Eugeniya Kudryashova
 ekudryash...@mirantis.com wrote:

 Hi, all


 OpenStack APIs interact with each other and with external systems partially by
 passing HTTP errors. The only valuable difference between types of
 exceptions is the HTTP code, but the current codes are generalized, so an
 external system can't distinguish what actually happened.


 As an example, the two failures below differ only by error message:


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 189

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 2cfeb9283d784cfba694f3122ef413bf

 Content-Type: application/json


 {server: {name: demo, imageRef:
 171c9d7d-3912-4547-b2a5-ea157eb08622, key_name: test, flavorRef:
 42, max_count: 1, min_count: 1, security_groups: [{name: bar}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 118

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-a995e1fc-7ea4-4305-a7ae-c569169936c0

 Date: Fri, 23 Jan 2015 10:43:33 GMT


 {badRequest: {message: Security group bar not found for project
 790f5693e97a40d38c4d5bfdc45acb09., code: 400}}


 and


 request:

 POST /v2/790f5693e97a40d38c4d5bfdc45acb09/servers HTTP/1.1

 Host: 192.168.122.195:8774

 X-Auth-Project-Id: demo

 Accept-Encoding: gzip, deflate, compress

 Content-Length: 192

 Accept: application/json

 User-Agent: python-novaclient

 X-Auth-Token: 24c0d30ff76c42e0ae160fa93db8cf71

 Content-Type: application/json


 {server: {name: demo, imageRef:
 171c9d7d-3912-4547-b2a5-ea157eb08622, key_name: foo, flavorRef:
 42, max_count: 1, min_count: 1, security_groups: [{name:
 default}]}}

 response:

 HTTP/1.1 400 Bad Request

 Content-Length: 70

 Content-Type: application/json; charset=UTF-8

 X-Compute-Request-Id: req-87604089-7071-40a7-a34b-7bc56d0551f5

 Date: Fri, 23 Jan 2015 10:39:43 GMT


 {badRequest: {message: Invalid key_name provided., code: 400}}


 The former specifies an incorrect security group name, and the latter an
 incorrect keypair name. The problem is that, just looking at the
 response body and HTTP response code, an external system can't understand
 what exactly went wrong. And parsing error messages is not the way
 we'd like to solve this problem.


 For the Compute API v 2 we have the shortened Error Code in the
 documentation at
 http://developer.openstack.org/api-ref-compute-v2.html#compute_server-addresses

 such as:

 Error response codes
 computeFault (400, 500, …), serviceUnavailable (503), badRequest (400),
 unauthorized (401), forbidden (403), badMethod (405), overLimit (413),
 itemNotFound (404), buildInProgress (409)

 Thanks to a recent update (well, last fall) to our build tool for docs.

 What we don't have is a table in the docs saying computeFault has this
 longer Description -- is that what you are asking for, for all OpenStack
 APIs?

 Tell me more.

 Anne




 Another example for solving this problem is AWS EC2 exception codes [1]


 So if we have some service based on OpenStack projects, it would be useful
 to have concrete error codes (textual or numeric), which would allow it to
 determine what actually went wrong and later process the obtained
 exception correctly. These codes should be predefined for each exception,
 have a documented structure, and allow the exception to be parsed correctly
 at each step of exception handling.


 So I’d like to discuss implementing such codes and their usage in OpenStack
 projects.


 [1] -
 http://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html





 --
 Anne Gentle
 annegen...@justwriteclick.com



Re: [openstack-dev] [nova] Integration with Ceph

2014-12-01 Thread Roman Podoliaka
Hi Sergey,

AFAIU, the problem is that when Nova was designed initially, it had no
notion of shared storage (e.g. Ceph), so all the resources were
considered to be local to compute nodes. In that case each total value
was a sum of values per node. But as we see now, that doesn't work
well with Ceph, when the storage is actually shared and doesn't belong
to any particular node.

It seems we've got two different, but related problems here:

1) resource tracking is incorrect, as nodes shouldn't report info
about storage when shared storage is used (fixing this by reporting
e.g. 0 values would require changes to nova-scheduler)

2) total storage is calculated incorrectly as we just sum the values
reported by each node

From my point of view, in order to fix both, it might make sense for
nova-api/nova-scheduler to actually know, if shared storage is used
and access Ceph directly (otherwise, it's not clear, which compute
node we should ask for this data, and what exactly we should ask, as
we don't actually know if the storage is shared in the context of
nova-api/nova-scheduler processes).
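One possible shape of such a fix, sketched in plain Python: if per-node reports carried a shared-pool identifier (a hypothetical field, not something Nova reports today), the aggregator could count each shared pool once instead of once per node.

```python
# Hypothetical per-node resource reports: nodes backed by shared storage
# report the same pool identifier, so naive summation double-counts it.
reports = [
    {"host": "node1", "disk_gb": 1000, "pool": "ceph:cluster-a"},
    {"host": "node2", "disk_gb": 1000, "pool": "ceph:cluster-a"},
    {"host": "node3", "disk_gb": 200,  "pool": None},  # local disk
]

def naive_total(reports):
    # The current behaviour described above: just sum per-node values.
    return sum(r["disk_gb"] for r in reports)

def dedup_total(reports):
    total, seen_pools = 0, set()
    for r in reports:
        if r["pool"] is None:
            total += r["disk_gb"]       # local storage: counted per node
        elif r["pool"] not in seen_pools:
            seen_pools.add(r["pool"])   # shared pool: counted once
            total += r["disk_gb"]
    return total

print(naive_total(reports))  # 2200 -- double-counts the Ceph cluster
print(dedup_total(reports))  # 1200
```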

Thanks,
Roman

On Mon, Nov 24, 2014 at 3:45 PM, Sergey Nikitin sniki...@mirantis.com wrote:
 Hi,
 As you know we can use Ceph as ephemeral storage in nova. But we have some
 problems with its integration. First of all, total storage of compute nodes
 is calculated incorrectly. (more details here
 https://bugs.launchpad.net/nova/+bug/1387812). I want to fix this problem.
 Now the total storage size is just the sum of the storage of all compute nodes,
 and the information about total storage is taken directly from the db.
 (https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L663-L691).
 To fix the problem we should check the type of storage in use. If the type of
 storage is RBD, we should get the information about total storage directly
 from Ceph.
 I proposed a patch (https://review.openstack.org/#/c/132084/) which should
 fix this problem, but I got the fair comment that we shouldn't check type of
 storage on the API layer.

 The other problem is that the information about the size of a compute node is
 incorrect too. Now the size of each node equals the size of the whole Ceph cluster.

 On one hand it is good to do not check type of storage on the API layer, on
 the other hand there are some reasons to check it on API layer:
 1. It would be useful for live migration because now a user has to send
 information about storage with API request.
 2. It helps to fix problem with total storage.
 3. It helps to fix problem with size of compute nodes.

 So I want to ask you: is it a good idea to get information about the type of
 storage at the API layer? If not, are there any other ideas on how to get
 correct information about Ceph storage?

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] oslo.db 1.1.0 released

2014-11-18 Thread Roman Podoliaka
Matt,

This is really weird. Victor and I will take a closer look.

Thanks,
Roman

On Tue, Nov 18, 2014 at 5:22 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 11/17/2014 9:36 AM, Victor Sergeyev wrote:

 Hello All!

 Oslo team is pleased to announce the new release of Oslo database
 handling library - oslo.db 1.1.0

 List of changes:
 $ git log --oneline --no-merges  1.0.2..master
 1b0c2b1 Imported Translations from Transifex
 9aa02f4 Updated from global requirements
 766ff5e Activate pep8 check that _ is imported
 f99e1b5 Assert exceptions based on API, not string messages
 490f644 Updated from global requirements
 8bb12c0 Updated from global requirements
 4e19870 Reorganize DbTestCase to use provisioning completely
 2a6dbcd Set utf8 encoding for mysql and postgresql
 1b41056 ModelsMigrationsSync: Add check for foreign keys
 8fb696e Updated from global requirements
 ba4a881 Remove extraneous vim editor configuration comments
 33011a5 Remove utils.drop_unique_constraint()
 64f6062 Improve error reporting for backend import failures
 01a54cc Ensure create_engine() retries the initial connection test
 26ec2fc Imported Translations from Transifex
 9129545 Use fixture from oslo.config instead of oslo-incubator
 2285310 Move begin ping listener to a connect listener
 7f9f4f1 Create a nested helper function that will work on py3.x
 b42d8f1 Imported Translations from Transifex
 4fa3350 Start adding a environment for py34/py33
 b09ee9a Explicitly depend on six in requirements file
 7a3e091 Unwrap DialectFunctionDispatcher from itself.
 0928d73 Updated from global requirements
 696f3c1 Use six.wraps instead of functools.wraps
 8fac4c7 Update help string to use database
 fc8eb62 Use __qualname__ if we can
 6a664b9 Add description for test_models_sync function
 8bc1fb7 Use the six provided iterator mix-in
 436dfdc ModelsMigrationsSync:add correct server_default check for Enum
 2075074 Add history/changelog to docs
 c9e5fdf Add run_cross_tests.sh script

 Thanks Andreas Jaeger, Ann Kamyshnikova, Christian Berendt, Davanum
 Srinivas, Doug Hellmann, Ihar Hrachyshka, James Carey, Joshua Harlow,
 Mike Bayer, Oleksii Chuprykov, Roman Podoliaka for contributing to this
 release.

 Please report issues to the bug tracker:
 https://bugs.launchpad.net/oslo.db




 And...the nova postgresql opportunistic DB tests are failing quite
 frequently due to some race introduced by the new library version [1].

 [1] https://bugs.launchpad.net/oslo.db/+bug/1393633

 --

 Thanks,

 Matt Riedemann





Re: [openstack-dev] [oslo.db] Marker based paging

2014-11-03 Thread Roman Podoliaka
Hi Mike,

I think that code was taken from Nova (or maybe some other project) as
is and we haven't touched it since then.

Please speak up - we want to know about all possible problems with
current approach.
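For context, marker (keyset) paging can be sketched in a few lines; its usual selling point over OFFSET/LIMIT is that pages stay stable when rows are inserted or deleted concurrently. This is an illustration, not the actual oslo.db code:

```python
# Minimal keyset ("marker") paging over records sorted by a unique key.
# The marker is the id of the last item on the previous page; unlike
# OFFSET/LIMIT, rows added or removed earlier in the ordering do not
# shift subsequent pages.
records = [{"id": i, "name": f"story-{i}"} for i in range(1, 8)]

def get_page(records, limit, marker=None):
    items = sorted(records, key=lambda r: r["id"])
    if marker is not None:
        items = [r for r in items if r["id"] > marker]
    return items[:limit]

page1 = get_page(records, limit=3)
page2 = get_page(records, limit=3, marker=page1[-1]["id"])
print([r["id"] for r in page1])  # [1, 2, 3]
print([r["id"] for r in page2])  # [4, 5, 6]
```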

Thanks,
Roman

On Fri, Oct 31, 2014 at 2:58 PM, Heald, Mike mike.he...@hp.com wrote:
 Hi all,

 I'm implementing paging on storyboard, and I wanted to ask why we decided to 
 use marker based paging. I have some opinions on this, but I want to keep my 
 mouth shut until I find out what problem it was solving :)

 Thanks,
 Mike




Re: [openstack-dev] [novaclient] E12* rules

2014-10-20 Thread Roman Podoliaka
Hi Andrey,

Generally I'm opposed to such changes enabling random PEP8 checks, but
in this particular case I kind of like the fact that you fix the mess
with indents in the code.

python-novaclient code base is fairly small, CI nodes are not
overloaded at this point of the release cycle, code looks better
now... FWIW, I'd +1 your patches :)

Thanks,
Roman

On Fri, Oct 17, 2014 at 4:40 PM, Andrey Kurilin akuri...@mirantis.com wrote:
 Hi everyone!

 I'm working on enabling the E12* PEP8 rules in novaclient (the status of my
 work is listed below). IMO, PEP8 rules should be ignored only in extreme
 cases, for important reasons, and we should decrease the number of ignored
 rules. This helps to keep the code in a more strict, readable form, which is
 very important when working in a community.

 While working on rule E126, we started a discussion with Joe Gordon about
 the need for these rules. I have no idea about the reasons why they should
 be ignored, so I want to know:
 - Why should these rules be ignored?
 - What do you think about enabling these rules?

 Please, leave your opinion about E12* rules.

 Already enabled rules:
   E121,E125 - https://review.openstack.org/#/c/122888/
   E122 - https://review.openstack.org/#/c/123830/
   E123 - https://review.openstack.org/#/c/123831/

 Abandoned rule:
   E124 - https://review.openstack.org/#/c/123832/

 Pending review:
   E126 - https://review.openstack.org/#/c/123850/
   E127 - https://review.openstack.org/#/c/123851/
   E128 - https://review.openstack.org/#/c/127559/
   E129 - https://review.openstack.org/#/c/123852/


 --
 Best regards,
 Andrey Kurilin.



Re: [openstack-dev] [all] [oslo] Proposed database connectivity patterns

2014-10-09 Thread Roman Podoliaka
Hi Mike,

Great stuff! Fixing the mess with transactions and their scope is
probably one of the most important tasks for us, IMO. I look forward
for this to be implemented in oslo.db and consuming projects!

Thanks,
Roman

On Thu, Oct 9, 2014 at 12:07 AM, Mike Bayer mba...@redhat.com wrote:
 Hi all -

 I’ve drafted up my next brilliant idea for how to get Openstack projects to 
 use SQLAlchemy more effectively.   The proposal here establishes significant 
 detail on what’s wrong with the current state of things, e.g. the way I see 
 EngineFacade, get_session() and get_engine() being used, and proposes a new 
 system that provides a true facade around a managed context.   But of course, 
 it requires that you all change your code!  (a little bit).  Based on just a 
 few tiny conversations on IRC so far, seems like this might be a hard sell.  
 But please note, most projects are pretty much broken in how they use the 
 database - this proposal is just a first step to making it all non-broken, if 
 not as amazing and cool as some people wish it could be.  Unbreaking the code 
 and removing boilerplate is the first step - with sane and declarative 
 patterns in place, we can then build in more bells and whistles.

 Hoping you’re all super curious now, here it is!  Jump in:  
 https://review.openstack.org/#/c/125181/

 - mike
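To make the "managed context" idea concrete, here is a minimal sketch of the pattern the proposal is about, using sqlite3 instead of oslo.db. The name writer_session is invented for illustration and is not the proposed API; the real interface is in the review linked above:

```python
import contextlib
import sqlite3

@contextlib.contextmanager
def writer_session(conn):
    """Commit on success, roll back on failure -- the transaction scope is
    declared once, instead of each function juggling get_session()/commit()."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instance (uuid TEXT)")
with writer_session(conn) as s:
    s.execute("INSERT INTO instance VALUES ('abc-123')")
count = conn.execute("SELECT count(*) FROM instance").fetchone()[0]
print(count)
```

The point of the facade is exactly this: callers state their intent (reader/writer) and the scope of the transaction is managed for them.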









[openstack-dev] [oslo] Meeting time

2014-10-07 Thread Roman Podoliaka
Hi all,

Last Friday we decided to find a better time for the weekly team
meeting. Keeping in mind that DST ends soon (October the 26th in
Europe, November the 2nd in the US), I think, we can choose from:

- Mondays at 1600 UTC [1]: #openstack-meeting-alt, #openstack-meeting-3
- Thursdays at 1600 UTC [2]: #openstack-meeting-3
- Thursdays at 1700 UTC [3]: #openstack-meeting-3
- Fridays at 1600 UTC (current time, may be ok when DST ends) [4]:
#openstack-meeting-alt, #openstack-meeting-3

(assuming the information about meeting rooms availability provided
here [0] is correct)

Basically, anything earlier than 1600 UTC will be too early for those
who live in CA after November the 2nd. And 1700+ UTC is beer o'clock
here in Europe :)

 Alternatively, we could ask Infra to add the meeting bot directly to
#openstack-oslo channel and have the weekly meeting there on any day
we like around 1600 UTC (#openstack-oslo is not really crowded, so it
shouldn't be a problem to use it for a meeting once a week).

Please let me know what you think.

Thanks,
Roman

[0] 
https://www.google.com/calendar/ical/bj05mroquq28jhud58esggq...@group.calendar.google.com/public/basic.ics

[1] 
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014month=11day=3hour=16min=0sec=0p1=367p2=195p3=179p4=224

[2] 
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014month=11day=6hour=16min=0sec=0p1=367p2=195p3=179p4=224

[3] 
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014month=11day=6hour=17min=0sec=0p1=367p2=195p3=179p4=224

[4] 
http://www.timeanddate.com/worldclock/meetingdetails.html?year=2014month=11day=7hour=16min=0sec=0p1=367p2=195p3=179p4=224



Re: [openstack-dev] [nova] Create an instance with a custom uuid

2014-09-24 Thread Roman Podoliaka
Hi Joe,

 Tools like Pumphouse [1] (which migrates workloads, e.g. instances, between
 two OpenStack clouds) would benefit from support for this (Pumphouse
 would be able to replicate user instances in a new cloud up to their
 UUIDs).

Are there any known gotchas with support of this feature in REST APIs
(in general)?

Thanks,
Roman

[1] https://github.com/MirantisLabs/pumphouse

On Wed, Sep 24, 2014 at 10:23 AM, Joe Gordon joe.gord...@gmail.com wrote:
 Whats the use case for this? We should be thorough when making API changes
 like this.

 On Wed, Sep 24, 2014 at 6:56 AM, joehuang joehu...@huawei.com wrote:

 +1.

  Or at least provide a way to specify an external UUID for the instance,
  so that the instance can be retrieved through the external UUID, which may be
  linked to an external system's object.

 Chaoyi Huang ( joehuang )
 
  From: Pasquale Porreca [pasquale.porr...@dektech.com.au]
  Sent: 24 September 2014 21:08
  To: openstack-dev@lists.openstack.org
  Subject: [openstack-dev] [nova] Create an instance with a custom uuid

 Hello

 I would like to be able to specify the UUID of an instance when I create
 it. I found this discussion about this matter:
 https://lists.launchpad.net/openstack/msg22387.html
  but I could not find any blueprint. Anyway, I understood that this
  modification should not pose any particular issue.

 Would it be acceptable to pass the uuid as metadata, or should I instead
 modify the api if I want to set the UUID from the novaclient?

 Best regards

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr









Re: [openstack-dev] [nova] 2 weeks in the bug tracker

2014-09-19 Thread Roman Podoliaka
Hi all,

FWIW, a quick and dirty solution is here: http://xsnippet.org/360188/ :)

Thanks,
Roman

On Fri, Sep 19, 2014 at 2:03 PM, Ben Nemec openst...@nemebean.com wrote:
 On 09/19/2014 08:13 AM, Sean Dague wrote:
 I've spent the better part of the last 2 weeks in the Nova bug tracker
 to try to turn it into something that doesn't cause people to run away
 screaming. I don't remember exactly where we started at open bug count 2
 weeks ago (it was north of 1400, with > 200 bugs in new, but it might
 have been north of 1600), but as of this email we're at < 1000 open bugs
 (I'm counting Fix Committed as closed, even though LP does not), and ~0
 new bugs (depending on the time of the day).

 == Philosophy in Triaging ==

 I'm going to lay out the philosophy of triaging I've had, because this
 may also set the tone going forward.

 A bug tracker is a tool to help us make a better release. It does not
 exist for its own good; it exists to help. Which means when evaluating
 what stays in and what leaves we need to evaluate if any particular
 artifact will help us make a better release. But also more importantly
 realize that there is a cost for carrying every artifact in the tracker.
 Resolving duplicates gets non-linearly harder as the number of artifacts
 go up. Triaging gets non-linearly hard as the number of artifacts go up.

 With this I was being somewhat pragmatic about closing bugs. An old bug
 that is just a stacktrace is typically not useful. An old bug that is a
 vague sentence that we should refactor a particular module (with no
 specifics on the details) is not useful. A bug reported against a very
 old version of OpenStack where the code has changed a lot in the
 relevant area, and there aren't responses from the author, is not
 useful. Not useful bugs just add debt, and we should get rid of them.
 That makes the chance of pulling a random bug off the tracker something
 that you could actually look at fixing, instead of mostly just stalling out.

 So I closed a lot of stuff as Invalid / Opinion that fell into those camps.

 == Keeping New Bugs at close to 0 ==

 After driving the bugs in the New state down to zero last week, I found
 it's actually pretty easy to keep it at 0.

 We get 10 - 20 new bugs a day in Nova (during a weekday). Of those ~20%
 aren't actually a bug, and can be closed immediately. ~30% look like a
 bug, but don't have anywhere near enough information in them, and
 flipping them to incomplete with questions quickly means we have a real
 chance of getting the right info. ~10% are fixable in < 30 minutes' worth
 of work. And the rest are real bugs, that seem to have enough to dive
 into it, and can be triaged into Confirmed, set a priority, and add the
 appropriate tags for the area.

 But, more importantly, this means we can filter bug quality on the way
 in. And we can also encourage bug reporters that are giving us good
 stuff, or even easy stuff, as we respond quickly.

 Recommendation #1: we adopt a 0 new bugs policy to keep this from
 getting away from us in the future.

 We have this policy in TripleO, and to help keep it fresh in people's
 minds Roman Podolyaka (IIRC) wrote an untriaged-bot for the IRC channel
 that periodically posts a list of any New bugs.  I've found it very
 helpful, so it's probably worth getting that into infra somewhere so
 other people can use it too.
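The bot itself is little more than a periodic query plus formatting. A rough stdlib-only sketch of the formatting half — the Launchpad fetch is omitted, and the bug records below are invented:

```python
def untriaged_report(records):
    """Return one IRC-ready line per bug still in the New state."""
    new = [b for b in records if b["status"] == "New"]
    return ["bug #%d (%s): %s" % (b["id"], b["status"], b["title"])
            for b in new]

# Stand-in for records fetched from the Launchpad API:
bugs = [
    {"id": 1368773, "status": "New", "title": "terse multiprocess failure"},
    {"id": 1234567, "status": "Triaged", "title": "already handled"},
]
for line in untriaged_report(bugs):
    print(line)
```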


 == Our worst bug reporters are often core reviewers ==

 I'm going to pick on Dan Prince here, mostly because I have a recent
 concrete example, however in triaging the bug queue much of the core
 team is to blame (including myself).

 https://bugs.launchpad.net/nova/+bug/1368773 is a terrible bug. Also, it
 was set incomplete with no response. I'm almost 100% sure it's a dupe of
 the multiprocess bug we've been tracking down but it's so terse that you
 can't get to the bottom of it.

 There were a ton of 2012 nova bugs that were basically post-it notes.
 Oh, we should refactor this function. Full stop. While those are fine
 for personal tracking, their value goes to zero probably 3 months after
 they are filed, especially if the reporter stops working on the issue at
 hand. Nova has plenty of "wouldn't it be great if we..." ideas. I'm not
 convinced using bugs for those is useful unless we go and close them out
 aggressively if they stall.

 Also, if Nova core can't file a good bug, it's hard to set the example
 for others in our community.

 Recommendation #2: hey, Nova core, lets be better about filing the kinds
 of bugs we want to see! mkay!

 Recommendation #3: Let's create a tag for personal work items or
 something for these class of TODOs people are leaving themselves that
 make them a ton easier to cull later when they stall and no one else has
 enough context to pick them up.

 == Tags ==

 The aggressive tagging that Tracy brought into the project has been
 awesome. It definitely helps slice out into better functional areas.
 Here is the top of our current official tag list (and bug count):

[openstack-dev] [oslo][db] Nominating Mike Bayer for the oslo.db core reviewers team

2014-08-15 Thread Roman Podoliaka
Hi Oslo team,

I propose that we add Mike Bayer (zzzeek) to the oslo.db core reviewers team.

Mike is an author of SQLAlchemy, Alembic, Mako Templates and some
other stuff we use in OpenStack. Mike has been working on OpenStack
for a few months contributing a lot of good patches and code reviews
to oslo.db [1]. He has also been revising the db patterns in our
projects and prepared a plan how to solve some of the problems we have
[2].

I think, Mike would be a good addition to the team.

Thanks,
Roman

[1] 
https://review.openstack.org/#/q/owner:%22Michael+Bayer+%253Cmike_mp%2540zzzcomputing.com%253E%22,n,z
[2] https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy



Re: [openstack-dev] [oslo.db]A proposal for DB read/write separation

2014-08-08 Thread Roman Podoliaka
Hi Li,

How are you going to make this separation transparent? I mean,
generally, in a function code, you can't know in advance if the
transaction will be read-only or it will contain an
INSERT/UPDATE/DELETE statement. On the other hand, as a developer, you
could analyze the DB queries that can be possibly issued by this
function and mark the function somehow, so that oslo.db would know for
which database connection the transaction should be created, but this
is essentially what slave_connection option is for and how it works
now.

Secondly, as you said, the key thing here is to separate reads and
writes. In order to make reads fast/reduce the load on your 'writable'
database, you'd move reads to asynchronous replicas. But you can't do
this transparently either, as there is a lot of places in our code, in
which we assume we are using the latest state of data, while
asynchronous replicas might actually be a little bit out of date. So,
in case of slave_connection, we use it only when it's ok for the code
to work with outdated rows, i.e. *explicitly* modify the existing
functions to work with slave_connection.

Thanks,
Roman
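The explicit opt-in Roman describes boils down to the caller deciding which connection a query may use. A toy sketch of that decision — the URIs are made up, and this dictionary stands in for oslo.db's actual master/slave connection handling:

```python
# Invented example URIs; in a real deployment these come from configuration
# ([database] connection and slave_connection options).
CONNECTIONS = {
    "master": "mysql://nova:secret@db-master/nova",
    "slave": "mysql://nova:secret@db-replica/nova",
}

def pick_connection(use_slave=False):
    """Only reads that explicitly tolerate replication lag hit the replica;
    everything else goes to the writable master."""
    return CONNECTIONS["slave" if use_slave else "master"]

print(pick_connection())                 # writes and fresh reads -> master
print(pick_connection(use_slave=True))   # lag-tolerant reads -> replica
```

The design point is that the opt-in cannot be made transparent: only the author of a function knows whether slightly stale rows are acceptable there.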

On Fri, Aug 8, 2014 at 7:03 AM, Li Ma skywalker.n...@gmail.com wrote:
 Getting a massive amount of information from data storage to be displayed is
 where most of the activity happens in OpenStack. The two activities of reading
 data and writing (creating, updating and deleting) data are fundamentally
 different.

 The optimization for these two opposite database activities can be done by
 physically separating the databases that service these two different
 activities. All the writes go to database servers, which then replicates the
 written data to the database server(s) dedicated to servicing the reads.

 Currently, AFAIK, many OpenStack deployment in production try to take
 advantage of MySQL (includes Percona or MariaDB) multi-master Galera cluster.
 It is possible to design and implement a read/write separation schema
 for such a DB cluster.

 Actually, OpenStack has a method for read scalability via defining
 master_connection and slave_connection in configuration, but this method
 lacks flexibility because master or slave is decided in the logical
 context (code). It's not transparent for the application developer.
 As a result, it is not widely used in all the OpenStack projects.

 So, I'd like to propose a transparent read/write separation method
 for oslo.db that every project may happily takes advantage of it
 without any code modification.

 Moreover, I'd like to put it in the mailing list in advance to
 make sure it is acceptable for oslo.db.

 I'd appreciate any comments.

 br.
 Li Ma




[openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Roman Podoliaka
Hi all,

To my surprise I found that we default to using MyISAM in the gate
[1], while InnoDB would be a much more suitable choice, which people
use in production deployments (== we should test it in the gate). This
means, that every table, for which we haven't explicitly specified to
use InnoDB, will be created using MyISAM engine, which is clearly not
what we want (and we have migration scripts at least in Neutron which
don't specify InnoDB explicitly and rely on MySQL configuration
value).

 Is there any specific reason we default to MyISAM? Or should I submit
a patch changing the default storage engine to be InnoDB?

Thanks,
Roman

[1] 
https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12
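For context, "specifying InnoDB explicitly" means the table definition names the engine instead of inheriting the server default (in SQLAlchemy this is usually the Table's mysql_engine='InnoDB' keyword). A sketch of the difference at the DDL level — the table is invented:

```python
# Relies on the server default engine -- MyISAM under the gate configuration:
implicit = "CREATE TABLE ports (id VARCHAR(36) NOT NULL, PRIMARY KEY (id))"

# Pins the engine regardless of how the MySQL server is configured:
explicit = implicit + " ENGINE=InnoDB"

print("ENGINE=InnoDB" in implicit)
print("ENGINE=InnoDB" in explicit)
```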



Re: [openstack-dev] [Infra] MyISAM as a default storage engine for MySQL in the gate

2014-07-21 Thread Roman Podoliaka
 Aha, makes sense. Yeah, this means we're missing such a check at least in
 Neutron and should add one to the test suite. Thanks!
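The missing check boils down to asserting that no table silently picked up the server default engine. A sketch, where the mapping below stands in for the result of querying information_schema.tables in a real test:

```python
def non_innodb_tables(table_engines):
    """Return tables that ended up on an engine other than InnoDB."""
    return sorted(t for t, engine in table_engines.items()
                  if engine != "InnoDB")

# Invented stand-in for:
#   SELECT table_name, engine FROM information_schema.tables
#   WHERE table_schema = DATABASE()
observed = {"ports": "InnoDB", "networks": "MyISAM"}
offenders = non_innodb_tables(observed)
print(offenders)  # a migration test would assert this list is empty
```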

On Mon, Jul 21, 2014 at 6:34 PM, Clark Boylan clark.boy...@gmail.com wrote:

 On Jul 21, 2014 8:28 AM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Hi all,

 To my surprise I found that we default to using MyISAM in the gate
 [1], while InnoDB would be a much more suitable choice, which people
 use in production deployments (== we should test it in the gate). This
 means, that every table, for which we haven't explicitly specified to
 use InnoDB, will be created using MyISAM engine, which is clearly not
 what we want (and we have migration scripts at least in Neutron which
 don't specify InnoDB explicitly and rely on MySQL configuration
 value).

 Is there any specific reason we default to MyISAM? Or I should submit
 a patch changing the default storage engine to be InnoDB?

 We want projects to force the use of innodb over myisam. To test this the
 gate defaults to myisam and should check that innodb is used instead by the
 projects. So this is very intentional.

 Are we missing those checks in places?



 Thanks,
 Roman

 [1]
 https://github.com/openstack-infra/config/blob/master/modules/openstack_project/manifests/slave_db.pp#L12


 Clark




Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-09 Thread Roman Podoliaka
Hi Ihar,

AFAIU, the switch is a matter of pip install + specifying the correct
db URI in the config files. I'm not sure why you are filing a spec in
 Neutron project. IMHO, this has nothing to do with projects, but is
 rather a purely deployment question. E.g., don't we have
PostgreSQL+psycopg2 or MySQL+pymysql deployments of OpenStack right
now?

I think what you really want is to change the defaults we test in the
gate, which is a different problem.

Thanks,
Roman
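The deployment-level change Roman refers to is just the driver part of the SQLAlchemy database URI. A sketch of the rewrite — the URI and credentials are invented:

```python
def switch_driver(uri, driver="mysqlconnector"):
    """Rewrite 'mysql://...' (which SQLAlchemy resolves to the default
    mysqldb driver) to 'mysql+<driver>://...'."""
    scheme, rest = uri.split("://", 1)
    dialect = scheme.split("+", 1)[0]
    return "%s+%s://%s" % (dialect, driver, rest)

old = "mysql://neutron:secret@127.0.0.1/neutron"
print(switch_driver(old))
# mysql+mysqlconnector://neutron:secret@127.0.0.1/neutron
```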

On Wed, Jul 9, 2014 at 2:17 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 Hi all,

 Multiple projects are suffering from db lock timeouts due to deadlocks
 deep in mysqldb library that we use to interact with mysql servers. In
 essence, the problem is due to missing eventlet support in mysqldb
 module, meaning when a db lock is encountered, the library does not
 yield to the next green thread, allowing other threads to eventually
 unlock the grabbed lock, and instead it just blocks the main thread,
 that eventually raises timeout exception (OperationalError).

 The failed operation is not retried, leaving failing request not
 served. In Nova, there is a special retry mechanism for deadlocks,
 though I think it's more a hack than a proper fix.

 Neutron is one of the projects that suffer from those timeout errors a
 lot. Partly it's due to lack of discipline in how we do nested calls
 in l3_db and ml2_plugin code, but that's not something to change in
 foreseeable future, so we need to find another solution that is
 applicable for Juno. Ideally, the solution should be applicable for
 Icehouse too to allow distributors to resolve existing deadlocks
 without waiting for Juno.

 We've had several discussions and attempts to introduce a solution to
 the problem. Thanks to oslo.db guys, we now have more or less clear
 view on the cause of the failures and how to easily fix them. The
 solution is to switch mysqldb to something eventlet aware. The best
 candidate is probably MySQL Connector module that is an official MySQL
 client for Python and that shows some (preliminary) good results in
 terms of performance.

 I've posted a Neutron spec for the switch to the new client in Juno at
 [1]. Ideally, switch is just a matter of several fixes to oslo.db that
 would enable full support for the new driver already supported by
 SQLAlchemy, plus 'connection' string modified in service configuration
 files, plus documentation updates to refer to the new official way to
 configure services for MySQL. The database code won't, ideally,
 require any major changes, though some adaptation for the new client
 library may be needed. That said, Neutron does not seem to require any
 changes, though it was revealed that there are some alembic migration
 rules in Keystone or Glance that need (trivial) modifications.

 You can see how trivial the switch can be achieved for a service based
 on example for Neutron [2].

 While this is a Neutron specific proposal, there is an obvious wish to
 switch to the new library globally throughout all the projects, to
 reduce devops burden, among other things. My vision is that, ideally,
 we switch all projects to the new library in Juno, though we still may
 leave several projects for K in case any issues arise, similar to the
 way projects switched to oslo.messaging during two cycles instead of
 one. Though looking at how easy Neutron can be switched to the new
 library, I wouldn't expect any issues that would postpone the switch
 till K.

 It was mentioned in comments to the spec proposal that there were some
 discussions at the latest summit around possible switch in context of
 Nova that revealed some concerns, though they do not seem to be
 documented anywhere. So if you know anything about it, please comment.

 So, we'd like to hear from other projects what's your take on that
 move, whether you see any issues or have concerns about it.

 Thanks for your comments,
 /Ihar

 [1]: https://review.openstack.org/#/c/104905/
 [2]: https://review.openstack.org/#/c/105209/



Re: [openstack-dev] [neutron][all] switch from mysqldb to another eventlet aware mysql client

2014-07-09 Thread Roman Podoliaka
Hi all,

 Not sure what issues you are talking about, but I just replaced
 'mysql' with 'mysql+mysqlconnector' in my db connection string in
 neutron.conf, and 'neutron-db-manage upgrade head' worked like a charm
 for an empty schema.

Ihar, could please elaborate on what changes to oslo.db are needed?
(as an oslo.db developer I'm very interested in this part :) )

Thanks,
Roman

On Wed, Jul 9, 2014 at 5:43 PM, Ihar Hrachyshka ihrac...@redhat.com wrote:

 On 09/07/14 15:40, Sean Dague wrote:
 On 07/09/2014 09:00 AM, Roman Podoliaka wrote:
 Hi Ihar,

 AFAIU, the switch is a matter of pip install + specifying the
 correct db URI in the config files. I'm not sure why you are
 filing a spec in Neutron project. IMHO, this has nothing to do
 with projects, but rather a purely deployment question. E.g.
 don't we have PostgreSQL+psycopg2 or MySQL+pymysql deployments of
 OpenStack right now?

 I think what you really want is to change the defaults we test in
 the gate, which is a different problem.

 Because this is really a *new* driver. As you can see by the
 attempted run, it doesn't work with alembic given the definitions
 that neutron has. So it's not like this is currently compatible
 with OpenStack code.

 Well, to fix that, you just need to specify raise_on_warnings=False
 for connection (it's default for mysqldb but not mysql-connector).
 I've done it in devstack patch for now, but probably it belongs to
 oslo.db.
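 One low-touch way to carry such a driver flag is as a query argument on the
 connection URI, which the dialect can pick up as a connect-time option. This
 is a sketch with an invented URI; whether the boolean is coerced correctly
 for mysql-connector is worth verifying in the target deployment:

```python
uri = "mysql+mysqlconnector://neutron:secret@db-host/neutron"
tuned = uri + "?raise_on_warnings=False"
print(tuned)
```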



 Thanks, Roman

 On Wed, Jul 9, 2014 at 2:17 PM, Ihar Hrachyshka
 ihrac...@redhat.com wrote: Hi all,

 Multiple projects are suffering from db lock timeouts due to
 deadlocks deep in mysqldb library that we use to interact with
 mysql servers. In essence, the problem is due to missing eventlet
 support in mysqldb module, meaning when a db lock is encountered,
 the library does not yield to the next green thread, allowing
 other threads to eventually unlock the grabbed lock, and instead
 it just blocks the main thread, that eventually raises timeout
 exception (OperationalError).

 The failed operation is not retried, leaving failing request not
 served. In Nova, there is a special retry mechanism for
 deadlocks, though I think it's more a hack than a proper fix.

 Neutron is one of the projects that suffer from those timeout
 errors a lot. Partly it's due to lack of discipline in how we do
 nested calls in l3_db and ml2_plugin code, but that's not
 something to change in foreseeable future, so we need to find
 another solution that is applicable for Juno. Ideally, the
 solution should be applicable for Icehouse too to allow
 distributors to resolve existing deadlocks without waiting for
 Juno.

 We've had several discussions and attempts to introduce a
 solution to the problem. Thanks to oslo.db guys, we now have more
 or less clear view on the cause of the failures and how to easily
 fix them. The solution is to switch mysqldb to something eventlet
 aware. The best candidate is probably MySQL Connector module that
 is an official MySQL client for Python and that shows some
 (preliminary) good results in terms of performance.

 I've posted a Neutron spec for the switch to the new client in
 Juno at [1]. Ideally, switch is just a matter of several fixes to
 oslo.db that would enable full support for the new driver already
 supported by SQLAlchemy, plus 'connection' string modified in
 service configuration files, plus documentation updates to refer
 to the new official way to configure services for MySQL. The
 database code won't, ideally, require any major changes, though
 some adaptation for the new client library may be needed. That
 said, Neutron does not seem to require any changes, though it was
 revealed that there are some alembic migration rules in Keystone
 or Glance that need (trivial) modifications.

 You can see how trivial the switch can be achieved for a service
 based on example for Neutron [2].

 While this is a Neutron specific proposal, there is an obvious
 wish to switch to the new library globally throughout all the
 projects, to reduce devops burden, among other things. My vision
 is that, ideally, we switch all projects to the new library in
 Juno, though we still may leave several projects for K in case
 any issues arise, similar to the way projects switched to
 oslo.messaging during two cycles instead of one. Though looking
 at how easy Neutron can be switched to the new library, I
 wouldn't expect any issues that would postpone the switch till
 K.

 It was mentioned in comments to the spec proposal that there were
 some discussions at the latest summit around possible switch in
 context of Nova that revealed some concerns, though they do not
 seem to be documented anywhere. So if you know anything about it,
 please comment.

 So, we'd like to hear from other projects what's your take on
 that move, whether you see any issues or have concerns about it.

 Thanks for your comments, /Ihar

 [1]: https://review.openstack.org/#/c

Re: [openstack-dev] Moving neutron to oslo.db

2014-07-04 Thread Roman Podoliaka
Ben,

Neutron was updated to the latest version of db code from
oslo-incubator. That's probably all.

Thanks,
Roman

On Thu, Jul 3, 2014 at 8:10 PM, Ben Nemec openst...@nemebean.com wrote:
 +27, -2401

 Wow, that's pretty painless.  Were there earlier patches to Neutron to
 prepare for the transition or was it really that easy?

 On 07/03/2014 07:34 AM, Salvatore Orlando wrote:
 No I was missing everything and kept wasting time because of alembic.

 This will teach me to keep my mouth shut and don't distract people who are
 actually doing good work.

 Thanks for doings this work.

 Salvatore


 On 3 July 2014 14:15, Roman Podoliaka rpodoly...@mirantis.com wrote:

 Hi Salvatore,

 I must be missing something. Hasn't it been done in
 https://review.openstack.org/#/c/103519/? :)

 Thanks,
 Roman

 On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando sorla...@nicira.com
 wrote:
 Hi,

  As you surely know, in Juno oslo.db will graduate [1].
  I am currently working on the port. It has already been made clear that making
  alembic migrations idempotent and healing the DB schema is a requirement
  for this task.
 These two activities are tracked by the blueprints [2] and [3].
  I think we've seen enough in OpenStack to understand that there is no chance
  of being able to do the port to oslo.db in Juno.

 While blueprint [2] is already approved, I suggest to target also [3] for
 Juno so that we might be able to port neutron to oslo.db as soon as K
 opens.
  I expect this port to be not as invasive as the one for oslo.messaging, which
  required quite a lot of patches.

 Salvatore

 [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [2] https://review.openstack.org/#/c/95738/
 [3] https://review.openstack.org/#/c/101963/








Re: [openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Roman Podoliaka
Hi Salvatore,

I must be missing something. Hasn't it been done in
https://review.openstack.org/#/c/103519/? :)

Thanks,
Roman

On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando sorla...@nicira.com wrote:
 Hi,

  As you surely know, in Juno oslo.db will graduate [1].
  I am currently working on the port. It has already been made clear that making
  alembic migrations idempotent and healing the DB schema is a requirement
  for this task.
 These two activities are tracked by the blueprints [2] and [3].
  I think we've seen enough in OpenStack to understand that there is no chance
 of being able to do the port to oslo.db in Juno.

 While blueprint [2] is already approved, I suggest to target also [3] for
 Juno so that we might be able to port neutron to oslo.db as soon as K opens.
 I expect this port to be not as invasive as the one for oslo.messaging which
 required quite a lot of patches.

 Salvatore

 [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [2] https://review.openstack.org/#/c/95738/
 [3] https://review.openstack.org/#/c/101963/



Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Roman Podoliaka
Hi Fuelers,

Not directly related to bug squashing day, but something to keep in mind.

AFAIU, both MOS and Fuel bugs are currently tracked under
https://bugs.launchpad.net/fuel/ Launchpad project page. Most bugs
 filed there are probably deployment-specific, but still I bet there are
 a lot of bugs in OS projects that you run into. If you could tag those
using OS projects names (e.g. you already have the 'neutron' tag, but
not 'nova' one) when triaging new bugs, that would greatly help us to
find and fix them in both MOS and upstream projects.

Thanks,
Roman

On Wed, Jun 18, 2014 at 8:04 AM, Mike Scherbakov
mscherba...@mirantis.com wrote:
 Fuelers,
  please pay attention to stalled In Progress bugs too - those which have been
  In Progress for more than a week. See [1].


 [1]
 https://bugs.launchpad.net/fuel/+bugs?field.searchtext=orderby=date_last_updatedsearch=Searchfield.status%3Alist=INPROGRESSassignee_option=anyfield.assignee=field.bug_reporter=field.bug_commenter=field.subscriber=field.structural_subscriber=field.tag=field.tags_combinator=ANYfield.has_cve.used=field.omit_dupes.used=field.omit_dupes=onfield.affects_me.used=field.has_patch.used=field.has_branches.used=field.has_branches=onfield.has_no_branches.used=field.has_no_branches=onfield.has_blueprints.used=field.has_blueprints=onfield.has_no_blueprints.used=field.has_no_blueprints=on


 On Wed, Jun 18, 2014 at 8:43 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 Thanks for participation, folks.
 Current count:
 New - 12
 Incomplete - 30
 Confirmed / Triaged / in progress for 5.1 - 368

 I've not logged how many bugs we had, but calculated that 26 bugs were
 filed over last 24 hours.

 Overall, it seems we did a good job of triaging, but the results for fixing
 bugs are not that impressive. I'm inclined to think about another run, let's
 say, next Tuesday.



 On Tue, Jun 17, 2014 at 7:12 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:

 Current count:
 New - 56
 Incomplete - 48
 Confirmed/Triaged/In progress for 5.1 - 331

 Let's squash as many as we can!


 On Mon, Jun 16, 2014 at 6:16 AM, Mike Scherbakov
 mscherba...@mirantis.com wrote:

 Fuelers,
 as we discussed during last IRC meeting, I'm scheduling bug squashing
 day on Tuesday, June 17th.

 I'd like to propose the following order of bugs processing:

 Confirm / triage bugs in New status, assigning them to yourself to avoid
 the situation when a few people work on the same bug
 Review bugs in Incomplete status, move them to Confirmed / Triaged or
 close as Invalid.
 Follow https://wiki.openstack.org/wiki/BugTriage for the rest (this is
 MUST read for those who have not done it yet)

 When we are more or less done with triaging, we can start proposing
 fixes for bugs. I suggest to extensively use #fuel-dev IRC for
 synchronization, and while someone fixes some bugs - the other one can
 participate in review of fixes. Don't hesitate to ask for code reviews.

 Regards,
 --
 Mike Scherbakov
 #mihgen




 --
 Mike Scherbakov
 #mihgen




 --
 Mike Scherbakov
 #mihgen




 --
 Mike Scherbakov
 #mihgen




Re: [openstack-dev] [Fuel] Bug squashing day on June, 17

2014-06-18 Thread Roman Podoliaka
Hi guys,

Dmitry, I have nothing against using 'also affects', but
unfortunately, it seems that Launchpad advanced search doesn't allow
filtering by affected projects :( (my use case is to be able to list
only bugs affecting Nova in MOS, and since we deploy stable
releases rather than trunk, upstream Nova bugs aren't always
applicable or just have lower priority for us).

Mike, cool, I didn't know https://launchpad.net/mos existed!  I'm all
for using it rather than spamming you guys with purely MOS/OS bugs :)
So we should probably ask QAs to start filing those against MOS now.
But per-project tags can still be useful due to Launchpad advanced
search limitations.

Thanks,
Roman

On Thu, Jun 19, 2014 at 5:29 AM, Mike Scherbakov
mscherba...@mirantis.com wrote:
 Actually I agree on tagging bugs as Roman suggests.
 If no one is against it, we can create official tags for every project (nova,
 neutron, etc.) - as long as it simplifies life and is easy to use, I'm all for
 it.


 On Thu, Jun 19, 2014 at 6:26 AM, Mike Scherbakov mscherba...@mirantis.com
 wrote:

 +1 to this approach.
 Actually we've just created separate LP project for MOS:
 https://launchpad.net/mos,
 and all bugs related to openstack / linux code (not Fuel), should be
 tracked there.
 I still think that we should also add other OpenStack projects by
 clicking on 'Also affects' where possible.


 On Thu, Jun 19, 2014 at 1:30 AM, Dmitry Borodaenko
 dborodae...@mirantis.com wrote:

 Roman,

 What do you think about adding OS projects into the bug as 'Also
 affects'? That allows tracking the upstream and downstream state of the bug
 separately while maintaining visibility of both on the same page. The only
 downside is spamming the bug with comments related to different projects,
 but I think it's a reasonable trade-off; you can't have too much information
 about a bug :)

 -DmitryB


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Deva,

I haven't actually touched Ironic db migrations tests code yet, but
your feedback is very valuable for oslo.db maintainers, thank you!

So currently, there are two ways to run migrations tests:
1. Opportunistically (using openstack_citest user credentials; this is
how we test migrations in the gates in Nova/Glance/Cinder/etc). I'm
surprised we don't provide this out of the box in common db code.
2. By providing database credentials in test_migrations.conf
(Nova/Glance/Cinder/etc have test_migrations.conf, though I haven't
ever tried to put mysql/postgresql credentials there and run unit
tests).

The latter came to common db code directly from Nova and I'm not sure
if anyone is using the incubator version of it in the consuming
projects. Actually, I'd really like us to drop this feature and stick
to the opportunistic tests of migrations (fyi, there is a patch on
review to oslo.db [1]) to ensure there is only one way to run the
migrations tests and that it is the way we run the tests in the gates.

[1] uses opportunistic DB test cases provided by oslo.db to prevent
race conditions: a db is created on demand per test (which is
obviously not fast, but safe and easy). And it's perfectly normal to
use a separate db per migrations test case, as this is a kind of
test that needs total control over the database, which cannot be
provided even by using high transaction isolation levels (so
unfortunately we can't use the solution proposed by Mike here).
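
As a rough illustration of the "db created on demand per test" idea, here is a
minimal sketch using the stdlib sqlite3 module for portability (oslo.db runs
the same pattern against real MySQL/PostgreSQL servers with the
openstack_citest credentials; the ThrowawayDatabase name below is made up, not
oslo.db API):

```python
# Minimal sketch of "a database on demand per test": each test gets a
# uniquely named throwaway database and drops it afterwards.  Stdlib
# sqlite3 stands in for a real MySQL/PostgreSQL server; the class name
# is hypothetical, not oslo.db API.
import os
import sqlite3
import tempfile
import uuid


class ThrowawayDatabase(object):
    """Create a uniquely named database for one test, drop it afterwards."""

    def __enter__(self):
        # A random name means two concurrently running tests can never
        # collide on the same database.
        self.path = os.path.join(
            tempfile.gettempdir(), "test_migrations_%s.db" % uuid.uuid4().hex)
        self.conn = sqlite3.connect(self.path)
        return self.conn

    def __exit__(self, *exc_info):
        self.conn.close()
        os.remove(self.path)  # the moral equivalent of DROP DATABASE


def test_add_chassis_table():
    # The test has total control over its own database.
    with ThrowawayDatabase() as conn:
        conn.execute("CREATE TABLE chassis (id INTEGER PRIMARY KEY)")
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        assert "chassis" in tables


test_add_chassis_table()
```

Each test owning its own uniquely named database is what removes the races
that show up when all test runners share a single test_migrations database.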

Migration tests using test_migrations.conf, on the other hand, leave
it up to you how to isolate separate test cases using the same
database. You could use file locks, putting one on each conflicting test
case to prevent race conditions, but this is not really handy, of
course.
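
The file-lock workaround could look roughly like this (a POSIX-only sketch
using the stdlib fcntl module; the database_lock helper is a hypothetical
name, not something oslo.db provides):

```python
# Sketch of the file-lock workaround: every test case touching the
# shared database takes the same inter-process lock first, so two test
# runners can never reset the database simultaneously.  POSIX-only
# (fcntl); database_lock is a hypothetical helper, not oslo.db API.
import fcntl
import os
import tempfile
from contextlib import contextmanager


@contextmanager
def database_lock(name="test_migrations"):
    lock_path = os.path.join(tempfile.gettempdir(), "%s.lock" % name)
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # blocks until the lock is free
        try:
            yield
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)


def test_walk_versions():
    with database_lock():
        # _reset_databases() and the migration walk would go here; no
        # other process holding the lock can interfere in the meantime.
        pass


test_walk_versions()
```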

Overall, I think this is a good example of a situation where we put
code into the incubator before it was really ready to be reused by other
projects. We should at least have added docs on how to use those
migrations tests properly. This is something we should become better
at as a team.

Ok, so at least we know about the problem and [1] should make it
easier for everyone in the consuming projects to run their migrations
tests.

Thanks,
Roman

[1] https://review.openstack.org/#/c/93424/

On Sat, Jun 7, 2014 at 3:12 AM, Devananda van der Veen
devananda@gmail.com wrote:
 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until recently
 when Lucas tried to add a DB migration and noticed that it didn't run... So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they should
 report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   No notice given when db migrations are not run due to missing engine

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then opportunistically checks each back-end,
 if it's available. This opportunistic checking was inherited from Nova so
 that tests could pass on developer workstations where not all backends are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an exception
 before the test code could call _have_mysql()
 - test_mysql_connect_fail() actually failed! Again, because setUp() raises
 an exception before running the test itself

 Unfortunately, there's one more problem... when I run the tests in parallel,
 they fail randomly because sometimes two test threads run different
 migration tests, and the setUp() for one thread (remember, it calls
 _reset_databases) blows up the other test.

 Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
 database exists
   ProgrammingError: (ProgrammingError) (1146, Table
 'test_migrations.alembic_version' doesn't exist)

 As far as I can tell, this is all coming from:

 https://github.com/openstack/oslo-incubator/blob/master/openstack/common/db/sqlalchemy/test_migrations.py#L86;L111


 So, Ironic devs -- if you see a DB migration proposed, pay extra attention
 to it. We aren't running migration tests in our check or gate queues right
 now, and we shouldn't enable them until this fixed.

 Regards,
 Devananda

 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-08 Thread Roman Podoliaka
Hi Mike,

 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.

This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach with executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
was tests doing ROLLBACK on purpose. But you've updated the recipe
since then and this can probably be solved by using savepoints. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia probably: [2] needs a spec with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be the way to go for running all db-related tests except the
ones using DDL statements (as any DDL statement implicitly commits the
current transaction on MySQL and SQLite, AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer mba...@redhat.com wrote:


 Hello -

 Just an introduction, I’m Mike Bayer, the creator of SQLAlchemy and Alembic
 migrations. I’ve just joined on as a full time Openstack contributor,
 and trying to help improve processes such as these is my primary
 responsibility.

 I’ve had several conversations already about how migrations are run within
 test suites in various openstack projects.   I’m kind of surprised by this
 approach of dropping and recreating the whole database for individual tests.
 Running tests in parallel is obviously made very difficult by this style,
 but even beyond that, a lot of databases don’t respond well to lots of
 dropping/rebuilding of tables and/or databases in any case; while SQLite and
 MySQL are probably the most forgiving of this, 

Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Roman Podoliaka
Hi Matt,

We're waiting for a few important fixes to be merged (usage of
oslo.config, eventlet tpool support). Once those are merged, we'll cut
the initial release.

Thanks,
Roman

On Fri, May 30, 2014 at 5:19 PM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:


 On 4/25/2014 7:46 AM, Doug Hellmann wrote:

 On Fri, Apr 25, 2014 at 8:33 AM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:



 On 4/18/2014 1:18 PM, Doug Hellmann wrote:


 Nice work, Victor!

 I left a few comments on the commits that were made after the original
 history was exported from the incubator. There were a couple of small
 things to address before importing the library, and a couple that can
 wait until we have the normal code review system. I'd say just add new
 commits to fix the issues, rather than trying to amend the existing
 commits.

 We haven't really discussed how to communicate when we agree the new
 repository is ready to be imported, but it seems reasonable to use the
 patch in openstack-infra/config that will be used to do the import:
 https://review.openstack.org/#/c/78955/

 Doug

 On Fri, Apr 18, 2014 at 10:28 AM, Victor Sergeyev
 vserge...@mirantis.com wrote:


 Hello all,

 During the Icehouse release cycle our team has been working on splitting
 the openstack common db code into a separate library blueprint [1]. At the
 moment the issues mentioned in this bp and [2] are solved and we are
 moving
 forward to graduation of oslo.db. You can find the new oslo.db code at
 [3]

 So, before moving forward, I want to ask Oslo team to review oslo.db
 repository [3] and especially the commit, that allows the unit tests to
 pass
 [4].

 Thanks,
 Victor

 [1] https://blueprints.launchpad.net/oslo/+spec/oslo-db-lib
 [2] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
 [3] https://github.com/malor/oslo.db
 [4]


 https://github.com/malor/oslo.db/commit/276f7570d7af4a7a62d0e1ffb4edf904cfbf0600



 I'm probably just late to the party, but a simple question: why is it in
 the
 malor group in github rather than the openstack group, like
 oslo.messaging
 and oslo.rootwrap?  Is that temporary or will it be moved at some point?


 This is the copy of the code being prepared to import into a new
 oslo.db repository. It's easier to set up that temporary hosting on
 github. The repo has been approved to be imported, and after that
 happens it will be hosted on our git server like all of the other oslo
 libraries.

 Doug


 --

 Thanks,

 Matt Riedemann





 Are there any status updates on where we are with this [1]?  I see that
 oslo.db is in git.openstack.org now [2].  There is a super-alpha dev package
 on PyPI [3]; are we waiting for an official release?

 I'd like to start moving nova over to using oslo.db or at least get an idea
 for how much work it's going to be.  I don't imagine it's going to be that
 difficult since I think a lot of the oslo.db code originated in nova.

 [1] https://review.openstack.org/#/c/91407/
 [2] http://git.openstack.org/cgit/openstack/oslo.db/
 [3] https://pypi.python.org/pypi/oslo.db/0.0.1.dev15.g7efbf12


 --

 Thanks,

 Matt Riedemann




Re: [openstack-dev] [oslo][db] oslo.db repository review request

2014-05-30 Thread Roman Podoliaka
Hi Sergey,

tl;dr

I'd like it to be a ready-to-use version, but not 1.0.0.

So it's a good question and I'd like to hear more input on this from all.

If we start from 1.0.0, this will mean that we'll be very limited in
terms of changes to the public API we can make without bumping the MAJOR
part of the version number. I don't expect the number of those changes
to be big, but I also don't want us to happen in a situation when we
have oslo.db 3.0.0 in a few months (if we follow semver
pragmatically).

Perhaps, we should stick to 0.MINOR.PATCH versioning for now (as e.g.
SQLAlchemy and TripleO projects do)? These won't be alphas, but rather
ready to use versions. And we would still have a bit more 'freedom' to
do small API changes bumping the MINOR part of the version number (we
could also do intermediate releases deprecating some stuff, so we
don't break people's projects every time we make some API change).

Thanks,
Roman

On Fri, May 30, 2014 at 6:06 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Hey Roman,

 will it be an alpha version that should not be used by other projects,
 or will it be ready to use?

 Thanks.

 On Fri, May 30, 2014 at 6:36 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Matt,

 We're waiting for a few important fixes to be merged (usage of
 oslo.config, eventlet tpool support). Once those are merged, we'll cut
 the initial release.

 Thanks,
 Roman


Re: [openstack-dev] [nova] plan for moving to using oslo.db

2014-05-12 Thread Roman Podoliaka
Hi all,

Yes, once the oslo.db initial release is cut, we expect the migration
from its oslo-incubator version to the library to be as
simple as following the steps you've mentioned. Though, we still need
to finish the setup of oslo.db repo (AFAIK, this is currently blocked
by the fact we don't run gate tests for oslo.db patches. Doug, Victor,
please correct me, if I'm wrong).

Thanks,
Roman

On Mon, May 5, 2014 at 7:47 AM, Matt Riedemann
mrie...@linux.vnet.ibm.com wrote:
 Just wanted to get some thoughts down while they are in my head this
 morning.

 Oslo DB is now a library [1].  I'm trying to figure out what the steps are
 to get Nova using it so we can rip out the sync'ed common db code.

 1. Looks like it's not in global-requirements yet [2], so that's probably a
 first step.

 2. We'll want to cut a sqlalchemy-migrate release once this patch is merged
 [3]. This moves a decent chunk of unique constraint patch code out of oslo
 and into sqlalchemy-migrate where it belongs so we can run unit tests with
 sqlite to drop unique constraints.

 3. Rip this [4] out of oslo.db once migrate is updated and released.

 4. Replace nova.openstack.common.db with oslo.db.

 5. ???

 6. Profit!

 Did I miss anything?

 [1] http://git.openstack.org/cgit/openstack/oslo.db/
 [2]
 http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt
 [3] https://review.openstack.org/#/c/87773/
 [4] https://review.openstack.org/#/c/31016/

 --

 Thanks,

 Matt Riedemann




[openstack-dev] [TripleO][Summit] Neutron etherpad

2014-05-01 Thread Roman Podoliaka
Hi all,

Following the mailing list thread started by Marios, I've put some
initial questions to discuss into this etherpad document:

https://etherpad.openstack.org/p/juno-summit-tripleo-neutron

You are encouraged to take a look at it and add your thoughts and/or
questions :)

Thanks,
Roman



Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-25 Thread Roman Podoliaka
Hi all,

 Wouldn't it be better to make this label more persistent?

+1. It's really annoying to press the Work in Progress button every time
you upload a new patch set.

Thanks,
Roman

On Fri, Apr 25, 2014 at 11:02 AM, Yuriy Taraday yorik@gmail.com wrote:
 Hello.

 On Wed, Apr 23, 2014 at 2:40 AM, James E. Blair jebl...@openstack.org
 wrote:

 * The new Workflow label will have a -1 Work In Progress value which
   will replace the Work In Progress button and review state.  Core
   reviewers and change owners will have permission to set that value
   (which will be removed when a new patchset is uploaded).


 Wouldn't it be better to make this label more persistent?
 As I remember, there were some ML threads about keeping the WIP mark across patch
 sets. There were even talks about changing git-review to support this.
 How about we make it better with the new version of Gerrit?

 --

 Kind regards, Yuriy.



Re: [openstack-dev] oslo removal of use_tpool conf option

2014-04-18 Thread Roman Podoliaka
Hi all,

 I objected to this and asked (or rather demanded) for this to be added back into
 oslo. It was not. What I did not realize when I was reviewing this nova 
 patch, was that nova had already synced oslo’s change. And now we’ve 
 released Icehouse with a conf option missing that existed in Havana. 
 Whatever projects were using oslo’s DB API code has had this option 
 disappear (unless an alternative was merged). Maybe it’s only nova.. I 
 don’t know.

First, I'm very sorry that the Nova Icehouse release was cut with this
option missing. Whether it actually works or not, we should always
ensure we preserve backwards compatibility. I should have insisted on
making this sync from oslo-incubator 'atomic' in the first place, so
that the tpool option was removed from openstack/common code and
re-added to Nova code in one commit, not two. So it's clearly my fault
as a reviewer who made the original change to oslo-incubator.
Nevertheless, the patch re-adding this to Nova has been on review
since December 3rd. Can we ensure it lands in master ASAP and is
backported to Icehouse?

As for removing this option from oslo.db originally: as I've already
responded to your comment in review, I believe oslo.db should neither
know nor care whether you use eventlet/gevent/OS threads/multiple
processes/callbacks/etc for handling concurrency. For the very same
reason SQLAlchemy doesn't do that either. It just can't (and should
not) make such decisions for you. At the same time, eventlet provides
a very similar feature out of the box, and
https://review.openstack.org/#/c/59760/ reuses it in Nova.

 unless you really want to live with DB calls blocking the whole process. I 
 know I don’t

Me neither. But the way we've been dealing with this in Nova and other
projects is to have multiple workers processing those queries. I know
it's not perfect, but it's what we default to (what folks mostly use
in production) and what we test. And, as we all know, anything that is
untested is broken. If eventlet tpool were a better option, I believe
we would default to it. On the other hand, this seems to be a
fundamental issue of the MySQLdb-python DB API driver: a pure-Python
driver (using more CPU time, of course), as well as psycopg2, would
work just fine. Perhaps it's MySQLdb-python we should fix, rather than
relying on the workaround provided by eventlet.
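For illustration only: eventlet's tpool works by shipping blocking calls off
to a pool of real OS threads so the cooperative event loop isn't stalled. A
rough standard-library analogy of that idea (not the eventlet API itself)
looks like this:

```python
import concurrent.futures
import time

def blocking_query(seconds):
    # Stand-in for a blocking DB API call (e.g. a MySQLdb query running in
    # C code, which never yields control back to a cooperative event loop).
    time.sleep(seconds)
    return "rows"

# A small pool of OS threads: blocking calls run here, so whatever drives
# the main loop stays responsive while the "queries" sleep concurrently.
pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

futures = [pool.submit(blocking_query, 0.05) for _ in range(4)]
results = [f.result() for f in futures]
print(results)  # -> ['rows', 'rows', 'rows', 'rows']
```

In eventlet, tpool.execute() plays roughly the role pool.submit() plays
here, with the hub scheduling green threads while the OS threads block.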

Once again, sorry for breaking things. Let's fix this and try not to
repeat the same mistakes in the future.

Thanks,
Roman

On Fri, Apr 18, 2014 at 4:42 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Thanks for the good explanation, was just a curiosity of mine.

 Any idea why it has taken so long for the eventlet folks to fix this (I know
 u proposed a patch/patches a while ago)? Is eventlet really that
 unmaintained? :(

 From: Chris Behrens cbehr...@codestud.com
 Date: Thursday, April 17, 2014 at 4:59 PM
 To: Joshua Harlow harlo...@yahoo-inc.com
 Cc: Chris Behrens cbehr...@codestud.com, OpenStack Development Mailing
 List (not for usage questions) openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] oslo removal of use_tpool conf option


 On Apr 17, 2014, at 4:26 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 Just an honest question (no negativity intended I swear!).

 If a configuration option exists and only works with a patched eventlet why
 is that option an option to begin with? (I understand the reason for the
 patch, don't get me wrong).


 Right, it’s a valid question. This feature has existed one way or another in
 nova for quite a while. Initially the implementation in nova was wrong. I
 did not know that eventlet was also broken at the time, although I
 discovered it in the process of fixing nova’s code. I chose to leave the
 feature because it’s something that we absolutely need long term, unless you
 really want to live with DB calls blocking the whole process. I know I
 don’t. Unfortunately the bug in eventlet is out of our control. (I made an
 attempt at fixing it, but it’s not 100%. Eventlet folks currently have an
 alternative up that may or may not work… but certainly is not in a release
 yet.)  We have an outstanding bug on our side to track this, also.

 The below is comparing apples/oranges for me.

 - Chris


 Most users would not be able to use such a configuration since they do not
 have this patched eventlet (I assume a newer version of eventlet someday in
 the future will have this patch integrated in it?) so although I understand
 the frustration around this I don't understand why it would be an option in
 the first place. An aside, if the only way to use this option is via a
 non-standard eventlet then how is this option tested in the community, aka
 outside of said company?

 An example:

 If yahoo has some patched kernel A that requires an XYZ config turned on in
 openstack and the only way to take advantage of kernel A is with XYZ config
 'on', then it seems like that’s a yahoo only patch that is not testable and
 useable for 

Re: [openstack-dev] [fuel] oddness with sqlalchemy db().refresh(object)

2014-04-13 Thread Roman Podoliaka
Hi Andrew,

I believe it's just the way the SQLAlchemy Session works: the changes
you've made within a session aren't propagated to the db (read: no
actual queries are issued) until you explicitly call:

- flush(), or
- commit() (as commit() calls flush() first), or
- a Query method - one(), first(), all(), update(), delete() - as
these are actions that can only be performed by contacting the db.

 db().refresh(task_provision) call appeared to be reseting the object

Yes, and this is exactly what it is supposed to do: fetch the current
state of the model instance from the db (basically: SELECT * FROM
model_table WHERE id = instance_primary_key_value). This means that
all the changes you've made but haven't flushed yet will be lost.
I've made a small snippet to see this in action:
http://xsnippet.org/359888/  (with logging of SQL queries enabled)
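For those without the snippet handy, the same behaviour can be sketched with
nothing but the standard library (a toy analogy of session.refresh(), not
actual SQLAlchemy code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, weight REAL)")
conn.execute("INSERT INTO task (id, weight) VALUES (1, 1.0)")  # model default

class Task:
    """A toy stand-in for an ORM-mapped object."""
    def __init__(self, id_, weight):
        self.id, self.weight = id_, weight

    def refresh(self):
        # Analogous to session.refresh(obj): re-read state from the db,
        # discarding any in-memory changes that were never flushed.
        row = conn.execute(
            "SELECT weight FROM task WHERE id = ?", (self.id,)).fetchone()
        self.weight = row[0]

task = Task(1, 1.0)
task.weight = 0.4    # changed in memory only -- never written to the db
task.refresh()
print(task.weight)   # -> 1.0, the pending 0.4 is silently lost
```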

I hope this helps. I'm just wondering why you would want to call
refresh() there in the first place?

Thanks,
Roman

On Sat, Apr 12, 2014 at 1:33 AM, Andrew Woodward xar...@gmail.com wrote:
 Recently in one of my changes [1] I was fighting with one of the unit
 tests showing a failure for a test which should have been outside the
 sphere of influence.

 Traceback (most recent call last):
   File 
 /home/andreww/.virtualenvs/fuel/local/lib/python2.7/site-packages/mock.py,
 line 1190, in patched
 return func(*args, **keywargs)
   File 
 /home/andreww/git/fuel-web/nailgun/nailgun/test/integration/test_task_managers.py,
 line 65, in test_deployment_task_managers
 self.assertEquals(provision_task.weight, 0.4)
 AssertionError: 1.0 != 0.4

 After walking through a number of times and finally playing with it we
 where able to find that the db().refresh(task_provision) call appeared
 to be reseting the object. This caused the loss of the weight being
 set to 0.4 (1.0 is the model default). db().commit(), db().flush() and
 no call to db all caused the test to pass again.

 Does anyone have any input on why this would occur? The oddly odd part
 is that this test doesn't fail outside of the change set in [1]

 [1] https://review.openstack.org/#/c/78406/8/nailgun/nailgun/task/manager.py

 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [oslo] Split Oslo Incubator?

2014-04-08 Thread Roman Podoliaka
Hi Victor,

 The openstack.common module also known as Oslo Incubator or OpenStack 
 Common Libraries has 44 dependencies. IMO we reach a point where it became 
 too huge. Would it be possible to split it into smaller parts and 
 distribute it on PyPI with a stable API? I don't know Olso Incubator enough 
 to suggest the best granularity. A hint can be the number of dependencies.

This is exactly what we've been doing in Icehouse (and are going to
continue doing in Juno). In oslo-incubator terms this is called
'graduation' of a part of the incubator - it becomes a full-fledged
library distributed via PyPI.

 Sharing code is a good idea, but now we have SQLAchmey, WSGI, 
 cryptographic, RPC, etc. in the same module. Who needs all these features 
 at once? Olso Incubator must be usable outside OpenStack.

Sure! But I'd say that even now one can use/sync only the particular
modules of oslo-incubator he/she needs. Though I agree, releasing
these modules as libraries would simplify reuse of the code.

 We should now maybe move code from Oslo Incubator to upstream projects. 
 For example, timeutils extends the iso8601 module. We should maybe 
 contribute to this project and replace usage of timeutils with directy call 
 to iso8601?

Agreed. I can't speak for other libraries, but in oslo.db we've been
contributing features and bug fixes to SQLAlchemy, alembic and
SQLAlchemy-migrate. But we are still going to have some code that
won't be merged by upstream, simply because it covers a use case too
specific for them (e.g. the 'deleted' column provided by one of the
oslo.db model mixins).

Thanks,
Roman

On Tue, Apr 8, 2014 at 1:35 PM, Victor Stinner
victor.stin...@enovance.com wrote:
 (Follow-up of the [olso] use of the oslo namespace package thread)

 Hi,

 The openstack.common module also known as Oslo Incubator or OpenStack
 Common Libraries has 44 dependencies. IMO we reach a point where it became
 too huge. Would it be possible to split it into smaller parts and distribute
 it on PyPI with a stable API? I don't know Olso Incubator enough to suggest
 the best granularity. A hint can be the number of dependencies.

 Sharing code is a good idea, but now we have SQLAchmey, WSGI, cryptographic,
 RPC, etc. in the same module. Who needs all these features at once? Olso
 Incubator must be usable outside OpenStack.


 Currently, Oslo Incubator is installed and updated manually using a
 update.sh script which copy .py files and replace openstack.common with
 nova.openstack.common (where nova is the name of the project where Oslo
 Incubator is installed).

 I guess that update.sh was written to solve the two following points, tell me
 if I'm wrong:

  - unstable API: the code changes too often, whereas users don't want to
 update their code regulary. Nova has maybe an old version of Olso Incubator
 because of that.

  - only copy a few files to avoid a lot of dependencies and copy useless files

 Smaller modules should solve these issues. They should be used as module:
 installed system-wide, not copied in each project. So fixing a bug would only
 require a single change, without having to synchronize each project.


 Yesterday, I proposed to add a new time_monotonic() function to the timeutils
 module. We asked me to enhance existing modules (like Monotime).

 We should now maybe move code from Oslo Incubator to upstream projects. For
 example, timeutils extends the iso8601 module. We should maybe contribute to
 this project and replace usage of timeutils with directy call to iso8601?

 Victor



Re: [openstack-dev] [Tripleo][Neutron] Tripleo Neutron

2014-04-07 Thread Roman Podoliaka
Hi all,

Perhaps we should file a design session for the Neutron-specific questions?

 1. Define a neutron node (tripleo-image-elements/disk-image-builder) and 
 make sure it deploys and scales ok (tripleo-heat-templates/tuskar). This 
 comes under by lifeless blueprint at 
 https://blueprints.launchpad.net/tripleo/+spec/tripleo-tuskar-deployment-scaling-topologies

As far as I understand, this must be pretty straightforward: just
reuse the neutron elements we need when building an image for a
neutron node.

 2. HA the neutron node. For each neutron services/agents of interest 
 (neutron-dhcp-agent, neutron-l3-agent, neutron-lbaas-agent ... ) fix any 
 issues with running these in HA - perhaps there are none \o/? Useful 
 whether using a dedicated Neutron node or just for HA the 
 undercloud-control node

- HA for the DHCP agent is provided out of the box - we can just use
the 'dhcp_agents_per_network' option
(https://github.com/openstack/tripleo-image-elements/blob/master/elements/neutron/os-apply-config/etc/neutron/neutron.conf#L59)

- for the L3 agent a blueprint has been started, but the patches
haven't been merged yet -
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability

- API must be no different from other API services we have
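As a sketch, the DHCP-agent HA option above is a one-line change in
neutron.conf (the value 2 is just an illustrative choice):

```ini
[DEFAULT]
# Schedule each tenant network onto this many DHCP agents, so losing one
# agent does not take DHCP service down with it.
dhcp_agents_per_network = 2
```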

 3. Does it play with Ironic OK? I know there were some issues with Ironic 
 and Neutron DHCP, though I think this has now been addressed. Other 
 known/unkown bugs/issues with Ironic/Neutron - the baremetal driver will be 
 deprecated at some point...

You must be talking about specifying PXE boot options by means of the
neutron-dhcp-agent. Yes, this has been merged into Neutron for a while
now (https://review.openstack.org/#/c/30441/).

Thanks,
Roman



Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-14 Thread Roman Podoliaka
Hi all,

 Worth noting that there have been a few cases of projects patching OSLO 
 bugs intheir own tree rather than fixing in OSLO then resyncing. If anybody 
 has any tooling that can detect that, I'd love to see the results.

They shouldn't have done that :(

I totally agree that the 'sync from incubator' strategy of reusing
common code isn't pretty, but it's what we have now. And the oslo team
has been working hard to graduate libraries from the incubator and
then reuse them in target projects like any other 3rd-party libraries.
Hopefully, we'll no longer need to sync code from the incubator soon.

Thanks,
Roman


On Fri, Mar 14, 2014 at 9:48 AM, Duncan Thomas duncan.tho...@gmail.com wrote:
 On 13 March 2014 21:13, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi Steven,

 Code from openstack/common/ dir is 'synced' from oslo-incubator. The
 'sync' is effectively a copy of oslo-incubator subtree into a project
 source tree. As syncs are not done at the same time, the code of
 synced modules may indeed by different for each project depending on
 which commit of oslo-incubator was synced.


 Worth noting that there have been a few cases of projects patching
 OSLO bugs intheir own tree rather than fixing in OSLO then resyncing.
 If anybody has any tooling that can detect that, I'd love to see the
 results.

 I'm generally of the opinion that cinder is likely to be resistant to
 more parts of OSLO being used in cinder unless they are a proper
 library - syncs have caused us significant pain, code churn, review
 load and bugs in the last 12 months. I am but one voice among many,
 but I know I'm not the only member of core who feels this to be the
 case. Hopefully I can spend some time with OSLO core at the summit and
 discuss the problems I've found.



Re: [openstack-dev] Duplicate code for processing REST APIs

2014-03-13 Thread Roman Podoliaka
Hi Steven,

Code in the openstack/common/ dir is 'synced' from oslo-incubator. The
'sync' is effectively a copy of an oslo-incubator subtree into the
project's source tree. As syncs are not done at the same time, the
code of synced modules may indeed be different for each project,
depending on which commit of oslo-incubator was synced.

Thanks,
Roman

On Thu, Mar 13, 2014 at 2:03 PM, Steven Kaufer kau...@us.ibm.com wrote:
 While investigating some REST API updates I've discovered that there is a
 lot of duplicated code across the various OpenStack components.

 For example, the paginate_query function exists in all these locations and
 there are a few slight differences between most of them:

 https://github.com/openstack/ceilometer/blob/master/ceilometer/openstack/common/db/sqlalchemy/utils.py#L61
 https://github.com/openstack/cinder/blob/master/cinder/openstack/common/db/sqlalchemy/utils.py#L37
 https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L64
 https://github.com/openstack/heat/blob/master/heat/openstack/common/db/sqlalchemy/utils.py#L62
 https://github.com/openstack/keystone/blob/master/keystone/openstack/common/db/sqlalchemy/utils.py#L64
 https://github.com/openstack/neutron/blob/master/neutron/openstack/common/db/sqlalchemy/utils.py#L61
 https://github.com/openstack/nova/blob/master/nova/openstack/common/db/sqlalchemy/utils.py#L64

 Does anyone know if there is any work going on to move stuff like this into
 oslo and then deprecate these functions?  There are also many functions that
 process the REST API request parameters (getting the limit, marker, sort
 data, etc.) that are also replicated across many components.

 If no existing work is done in this area, how should this be tackled?  As a
 blueprint for Juno?

 Thanks,

 Steven Kaufer
 Cloud Systems Software
 kau...@us.ibm.com 507-253-5104
 Dept HMYS / Bld 015-2 / G119 / Rochester, MN 55901




Re: [openstack-dev] OpenStack vs. SQLA 0.9

2014-03-13 Thread Roman Podoliaka
Hi all,

I think it's actually not that hard to fix the errors we have when
using SQLAlchemy 0.9.x releases.

I uploaded two changes to Nova to fix the unit tests:
- https://review.openstack.org/#/c/80431/ (this one should also fix
the Tempest test run error)
- https://review.openstack.org/#/c/80432/

Thanks,
Roman

On Thu, Mar 13, 2014 at 7:41 PM, Thomas Goirand z...@debian.org wrote:
 On 03/14/2014 02:06 AM, Sean Dague wrote:
 On 03/13/2014 12:31 PM, Thomas Goirand wrote:
 On 03/12/2014 07:07 PM, Sean Dague wrote:
 Because of where we are in the freeze, I think this should wait until
 Juno opens to fix. Icehouse will only be compatible with SQLA 0.8, which
 I think is fine. I expect the rest of the issues can be addressed during
 Juno 1.

 -Sean

 Sean,

 No, it's not fine for me. I'd like things to be fixed so we can move
 forward. Debian Sid has SQLA 0.9, and Jessie (the next Debian stable)
 will be released SQLA 0.9 and with Icehouse, not Juno.

 We're past freeze, and this requires deep changes in Nova DB to work. So
 it's not going to happen. Nova provably does not work with SQLA 0.9, as
 seen in Tempest tests.

   -Sean

 I'd be nice if we considered more the fact that OpenStack, at some
 point, gets deployed on top of distributions... :/

 Anyway, if we can't do it because of the freeze, then I will have to
 carry the patch in the Debian package. Never the less, someone will have
 to work and fix it. If you know how to help, it'd be very nice if you
 proposed a patch, even if we don't accept it before Juno opens.

 Thomas Goirand (zigo)




Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Roman Podoliaka
Hi Chris,

AFAIK, most OpenStack projects enforce tables being created with the
encoding set to UTF-8, because MySQL has horrible defaults and would
use latin1 otherwise. PostgreSQL defaults to the locale of the system
on which it's running, and, I think, most systems default to UTF-8
nowadays.

Actually, I can't think of a reason why you would want to use anything
other than UTF-8 for storing and exchanging textual data. I'd
recommend reconsidering your encoding settings for PostgreSQL.
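The traceback quoted below is what such a charset mismatch looks like from
Python: bytes written under one encoding are later decoded as UTF-8. A
minimal stand-alone reproduction (the 0xf2 byte is taken from the error
message in this thread):

```python
# A latin1-encoded string containing a non-ASCII character round-trips
# fine in latin1, but its raw bytes are not valid UTF-8.
raw = "ò".encode("latin1")   # b'\xf2' -- 'ò' is byte 0xf2 in latin1
print(raw)                   # -> b'\xf2'

try:
    raw.decode("utf-8")      # what the application effectively does on read
except UnicodeDecodeError as exc:
    print("decode failed:", exc.reason)

# Written and read back with one consistent encoding, there is no problem:
print(raw.decode("latin1"))  # -> ò
```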

Thanks,
Roman

On Mon, Mar 10, 2014 at 10:24 AM, Chris Friesen
chris.frie...@windriver.com wrote:

 Hi,

 I'm using havana and recent we ran into an issue with heat related to
 character sets.

 In heat/db/sqlalchemy/api.py in user_creds_get() we call
 _decrypt() on an encrypted password stored in the database and then try to
 convert the result to unicode.  Today we hit a case where this errored out
 with the following message:

 UnicodeDecodeError: 'utf8' codec can't decode byte 0xf2 in position 0:
 invalid continuation byte

 We're using postgres and currently all the databases are using SQL_ASCII as
 the charset.

 I see that in icehouse heat will complain if you're using mysql and not
 using UTF-8.  There doesn't seem to be any checks for other databases
 though.

 It looks like devstack creates most databases as UTF-8 but uses latin1 for
 nova/nova_bm/nova_cell.  I assume this is because nova expects to migrate
 the db to UTF-8 later.  Given that those migrations specify a character set
 only for mysql, when using postgres should we explicitly default to UTF-8
 for everything?

 Thanks,
 Chris



Re: [openstack-dev] UTF-8 required charset/encoding for openstack database?

2014-03-10 Thread Roman Podoliaka
Hi all,

 It sounds like for the near future my best bet would be to just set the 
 install scripts to configure postgres (which is used only for openstack) to 
 default to utf-8.  Is that a fair summation?

Yes. UTF-8 is a reasonable default value.

Thanks,
Roman

On Mon, Mar 10, 2014 at 1:36 PM, Chris Friesen
chris.frie...@windriver.com wrote:
 On 03/10/2014 02:02 PM, Ben Nemec wrote:

 We just had a discussion about this in #openstack-oslo too.  See the
 discussion starting at 2014-03-10T16:32:26

 http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log


 In that discussion dhellmann said, I wonder if we make any assumptions
 elsewhere that we are using utf8 in the database

 For what it's worth I came across
 https://wiki.openstack.org/wiki/Encoding;, which proposed a rule:

 All external text that is not explicitly encoded (database storage,
 commandline arguments, etc.) should be presumed to be encoded as utf-8.


 While it seems Heat does require utf8 (or at least matching character
 sets) across all tables, I'm not sure the current solution is good.  It
 seems like we may want a migration to help with this for anyone who
 might already have mismatched tables.  There's a lot of overlap between
 that discussion and how to handle Postgres with this, I think.


 I'm lucky enough to be able to fix this now, I don't have any real
 deployments yet.

 It sounds like for the near future my best bet would be to just set the
 install scripts to configure postgres (which is used only for openstack) to
 default to utf-8.  Is that a fair summation?

 Chris




Re: [openstack-dev] [all][db][performance] Proposal: Get rid of soft deletion (step by step)

2014-03-10 Thread Roman Podoliaka
Hi all,

 I've never understood why we treat the DB as a LOG (keeping deleted == 0 
 records around) when we should just use a LOG (or similar system) to begin 
 with instead.

I can't agree more! Keeping deleted records in tables is hardly
useful: it's bad for performance (it makes tables and indexes larger)
and it probably covers a very limited set of use cases (if any) of
OpenStack users.

 One of approaches that I see is in step by step removing deleted column 
 from every table with probably code refactoring.

So we have homework to do: find out what projects actually use
soft-deletes for. I assume that soft-deletes are only used internally
and aren't exposed to API users, but let's check that. At the same
time, all new projects should avoid soft-deletes from the start.
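As an aside, the unique-constraint trick Boris describes under "Issue 2"
below can be sketched with plain sqlite3 (a toy illustration, not project
code): a UNIQUE constraint on (name, deleted), with deleted = 0 for live
rows and deleted = id for removed ones, still rejects two live duplicates
while allowing insert/delete/insert of the same name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        deleted INTEGER NOT NULL DEFAULT 0,   -- 0 = live, otherwise = id
        UNIQUE (name, deleted)
    )
""")

conn.execute("INSERT INTO user (name) VALUES ('alice')")

# A second live 'alice' violates the constraint -- the race-free guard.
try:
    conn.execute("INSERT INTO user (name) VALUES ('alice')")
except sqlite3.IntegrityError:
    print("duplicate live row rejected")

# Soft-delete: mark the row with its own id, freeing the (name, 0) slot...
conn.execute(
    "UPDATE user SET deleted = id WHERE name = 'alice' AND deleted = 0")

# ...so the same name can be inserted again.
conn.execute("INSERT INTO user (name) VALUES ('alice')")
print(conn.execute(
    "SELECT COUNT(*) FROM user WHERE name = 'alice'").fetchone()[0])  # -> 2
```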

Thanks,
Roman

On Mon, Mar 10, 2014 at 2:44 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:
 Sounds like a good idea to me.

 I've never understood why we treat the DB as a LOG (keeping deleted == 0
 records around) when we should just use a LOG (or similar system) to begin
 with instead.

 Does anyone use the feature of switching deleted == 1 back to deleted = 0?
 Has this worked out for u?

 Seems like some of the feedback on
 https://etherpad.openstack.org/p/operators-feedback-mar14 also suggests that
 this has been a operational pain-point for folks (Tool to delete things
 properly suggestions and such…).

 From: Boris Pavlovic bpavlo...@mirantis.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 10, 2014 at 1:29 PM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org,
 Victor Sergeyev vserge...@mirantis.com
 Subject: [openstack-dev] [all][db][performance] Proposal: Get rid of soft
 deletion (step by step)

 Hi stackers,

 (It's proposal for Juno.)

 Intro:

 Soft deletion means that records from DB are not actually deleted, they are
 just marked as a deleted. To mark record as a deleted we put in special
 table's column deleted record's ID value.

 Issue 1: Indexes  Queries
 We have to add in every query AND deleted == 0 to get non-deleted records.
 It produce performance issue, cause we should add it in any index one
 extra column.
 As well it produce extra complexity in db migrations and building queries.

 Issue 2: Unique constraints
 Why we store ID in deleted and not True/False?
 The reason is that we would like to be able to create real DB unique
 constraints and avoid race conditions on insert operation.

 Sample: we Have table (id, name, password, deleted) we would like to put in
 column name only unique value.

 Approach without UC: if count(`select  where name = name`) == 0:
 insert(...)
 (race cause we are able to add new record between )

 Approach with UC: try: insert(...) except Duplicate: ...

 So to add UC we have to add them on (name, deleted). (to be able to make
 insert/delete/insert with same name)

 As well it produce performance issues, because we have to use Complex unique
 constraints on 2  or more columns. + extra code  complexity in db
 migrations.

 Issue 3: Garbage collector

 It is really hard to make garbage collector that will have good performance
 and be enough common to work in any case for any project.
 Without garbage collector DevOps have to cleanup records by hand, (risk to
 break something). If they don't cleanup DB they will get very soon
 performance issue.

 To put in a nutshell most important issues:
 1) Extra complexity to each select query  extra column in each index
 2) Extra column in each Unique Constraint (worse performance)
 3) 2 Extra column in each table: (deleted, deleted_at)
 4) Common garbage collector is required


 To resolve all these issues we should just remove soft deletion.

 One of approaches that I see is in step by step removing deleted column
 from every table with probably code refactoring.  Actually we have 3
 different cases:

 1) We don't use soft deleted records:
 1.1) Do .delete() instead of .soft_delete()
 1.2) Change query to avoid adding extra deleted == 0 to each query
 1.3) Drop deleted and deleted_at columns

 2) We use soft deleted records for internal stuff e.g. periodic tasks
 2.1) Refactor code somehow: E.g. store all required data by periodic task in
 some special table that has: (id, type, json_data) columns
 2.2) On delete add record to this table
 2.3-5) similar to 1.1, 1.2, 13

 3) We use soft deleted records in API
 3.1) Deprecated API call if it is possible
 3.2) Make proxy call to ceilometer from API
 3.3) On .delete() store info about records in (ceilometer, or somewhere
 else)
 3.4-6) similar to 1.1, 1.2, 1.3

 This is not ready RoadMap, just base thoughts to start the constructive
 discussion in the mailing list, so %stacker% your opinion is very important!


 Best regards,
 Boris Pavlovic



Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Roman Podoliaka
Hi all,

So yeah, we could restore the option and put the creation of a slave
engine instance into the EngineFacade class, but I don't think we want
that.

The only reason slave connections aren't implemented e.g. in
SQLAlchemy itself is that SQLAlchemy, as a library, can't decide for
you how those engines should be used: do you have an ACTIVE-ACTIVE
setup or ACTIVE-PASSIVE, to which database must reads and writes go,
and so on. The same is true for oslo.db.

Nova is the only project that uses the slave_connection option, and it
was kind of broken: the nova bare metal driver uses a separate
database, and there was no way to use a slave db connection for it.

So due to this lack of consistency in how slave connections are used,
IMO this should be left up to the application to decide. And we
already provide the EngineFacade helper. So I'd just say: create an
EngineFacade instance for the slave connection explicitly, if you want
it to be used the way it is used in Nova right now.
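To make "left up to the application" concrete, here is a toy routing sketch
using only the standard library. This is not the oslo.db EngineFacade API:
replication is faked with SQLite's backup call, where a real deployment
would rely on MySQL/PostgreSQL replication instead.

```python
import sqlite3

master = sqlite3.connect(":memory:")
slave = sqlite3.connect(":memory:")

master.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, name TEXT)")
master.execute("INSERT INTO instance (name) VALUES ('vm-1')")
master.commit()

# Stand-in for replication: copy the master database into the slave. In a
# real deployment the database servers keep the slave in sync, not the app.
master.backup(slave)

def read_conn(use_slave):
    """The application-level policy: reads MAY go to the slave."""
    return slave if use_slave else master

rows = read_conn(use_slave=True).execute(
    "SELECT name FROM instance").fetchall()
print(rows)  # -> [('vm-1',)]
```

The point is that the routing decision (the use_slave flag here) lives in
the application, exactly as the Nova code does today.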

Thanks,
Roman

On Wed, Mar 5, 2014 at 8:35 AM, Doug Hellmann
doug.hellm...@dreamhost.com wrote:



 On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko
 alexei.kornie...@gmail.com wrote:

 Hello Darren,

 This option is removed since oslo.db will no longer manage engine objects
 on it's own. Since it will not store engines it cannot handle query
 dispatching.

 Every project that wan't to use slave_connection will have to implement
 this logic (creation of the slave engine and query dispatching) on it's own.


 If we are going to have multiple projects using that feature, we will have
 to restore it to oslo.db. Just because the primary API won't manage global
 objects doesn't mean we can't have a secondary API that does.

 Doug




 Regards,


 On 03/05/2014 05:18 PM, Darren Birkett wrote:

 Hi,

 I'm wondering why in this commit:


 https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

 ...the slave_connection option was removed.  It seems like a useful option
 to have, even if a lot of projects weren't yet using it.

 Darren




Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-03 Thread Roman Podoliaka
Hi all,

This is just another example of MySQL not having production-ready
defaults. The original idea was to force setting the SQL mode to
TRADITIONAL in the code of projects using oslo.db once they are ready
for it (i.e. unit and functional tests pass). So the warning was
actually aimed at developers rather than at users.

A sync of the latest oslo.db code will let users set any SQL mode they
like (the default is TRADITIONAL now, so the warning is gone).
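After that sync the mode becomes configurable in the service's config file
via the oslo.db option shown below (TRADITIONAL being the new default):

```ini
[database]
# SQL mode to request from MySQL at connect time. TRADITIONAL makes MySQL
# reject invalid data instead of silently truncating or mangling it.
mysql_sql_mode = TRADITIONAL
```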

Thanks,
Roman

On Mar 2, 2014 8:36 PM, John Griffith john.griff...@solidfire.com wrote:




 On Sun, Mar 2, 2014 at 7:42 PM, Sean Dague s...@dague.net wrote:

 Coming in at slightly less than 1 million log lines in the last 7 days:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOX0=

 This application has not enabled MySQL traditional mode, which means
 silent data corruption may occur

 This is being generated by  *.openstack.common.db.sqlalchemy.session in
 at least nova, glance, neutron, heat, ironic, and savana


http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhpcyBhcHBsaWNhdGlvbiBoYXMgbm90IGVuYWJsZWQgTXlTUUwgdHJhZGl0aW9uYWwgbW9kZSwgd2hpY2ggbWVhbnMgc2lsZW50IGRhdGEgY29ycnVwdGlvbiBtYXkgb2NjdXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5MzgxNDExMzcyOSwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6Im1vZHVsZSJ9


 At any rate, it would be good if someone that understood the details
 here could weigh in about whether is this really a true WARNING that
 needs to be fixed or if it's not, and just needs to be silenced.

 -Sean

 --
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net



 I came across this earlier this week when I was looking at this in
Cinder, haven't completely gone into detail here, but maybe Florian or Doug
have some insight?

 https://bugs.launchpad.net/oslo/+bug/1271706

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-28 Thread Roman Podoliaka
Hi Clark, all,

https://review.openstack.org/#/c/76634/ has been merged, but I still
get 'command denied' errors [1].

Is there something else that must be done before we can use the new
privileges of the openstack_citest user?

Thanks,
Roman

[1] 
http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/e115a5f/console.html

On Wed, Feb 26, 2014 at 11:54 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
 Works for me.

 Thanks,
 Roman

 On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for the openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for running tests in
 parallel on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, the openstack_citest user can only create tables in the
 created databases, but cannot perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database becomes the owner of
 the database (and can do everything within this database), while in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single-use slaves for all unit testing, so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark




Re: [openstack-dev] [Neutron][TripleO] Neutron DB migrations best practice

2014-02-28 Thread Roman Podoliaka
Hi Robert, all,

 But what are we meant to do? Nova etc are dead easy: nova-manage db sync 
 every time the code changes, done.
I believe it's no different from Nova: run db sync every time the
code changes. That's the only way to guarantee the most recent DB
schema version is used.

Interestingly, Neutron worked for us in TripleO even without db-sync.
I think that's because Neutron internally calls metadata.create_all(),
which creates the DB schema from the SQLAlchemy model definitions. That
is perfectly OK for *new installations*, as long as you then 'stamp'
the DB schema revision, but it *does not* work for upgrades.
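To make the "create_all(), then stamp" point concrete: alembic's stamp command simply records the current head revision in an alembic_version table, without running any migration scripts, so that later upgrades start from the right revision. A minimal stdlib sketch of that bookkeeping (the table contents and the 'abc123' revision id are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Stand-in for metadata.create_all(): build the current schema directly
# from the model definitions for a fresh installation.
conn.execute("CREATE TABLE networks (id TEXT PRIMARY KEY, name TEXT)")

# Stand-in for "alembic stamp head" (neutron-db-manage stamp head): record
# the head revision without running any migration scripts.
conn.execute("CREATE TABLE alembic_version (version_num TEXT NOT NULL)")
conn.execute("INSERT INTO alembic_version VALUES ('abc123')")

row = conn.execute("SELECT version_num FROM alembic_version").fetchone()
print(row[0])  # abc123
```

A later "upgrade head" then only applies revisions newer than the stamped one, which is why skipping the stamp on a fresh create_all() install breaks upgrades.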

Thanks,
Roman

On Wed, Feb 26, 2014 at 2:42 AM, Robert Collins
robe...@robertcollins.net wrote:
 So we had this bug earlier in the week;
 https://bugs.launchpad.net/tripleo/+bug/1283921

Table 'ovs_neutron.ml2_vlan_allocations' doesn't exist in 
 neutron-server.log

 We fixed this by running neutron-db-migrate upgrade head... which we
 figured out by googling and asking weird questions in
 #openstack-neutron.

 But what are we meant to do? Nova etc are dead easy: nova-manage db
 sync every time the code changes, done.

 Neutron seems to do something special and different here, and it's not
 documented from an ops perspective AFAICT - so - please help, cluebats
 needed!

 -Rob


 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud




Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-28 Thread Roman Podoliaka
Hi all,

Just an FYI note, not whining :)

Still failing with 'command denied':
http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/877792b/console.html

Thanks,
Roman

On Fri, Feb 28, 2014 at 1:41 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 Slave images are auto-rebuilt daily, so it probably hasn't happened
 yet for all providers.

 Anyway I see the following in nodepool logs:

 2014-02-28 02:24:09,255 INFO
 nodepool.image.build.rax-ord.bare-precise:  [0;36mnotice:
 /Stage[main]/Jenkins::Slave/Mysql::Db[openstack_citest]/Database_grant[openstack_citest@localhost/openstack_citest]/privileges:
 privileges changed '' to 'all' [0m

 On Fri, Feb 28, 2014 at 12:28 PM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Clark, all,

 https://review.openstack.org/#/c/76634/ has been merged, but I still
 get 'command denied' errors [1].

 Is there something else that must be done before we can use the new
 privileges of the openstack_citest user?

 Thanks,
 Roman

 [1] 
 http://logs.openstack.org/63/74963/4/check/gate-oslo-incubator-python27/e115a5f/console.html

 On Wed, Feb 26, 2014 at 11:54 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
 Works for me.

 Thanks,
 Roman

 On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com 
 wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for the openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for running tests in
 parallel on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, the openstack_citest user can only create tables in the
 created databases, but cannot perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database becomes the owner of
 the database (and can do everything within this database), while in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix 
 value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single-use slaves for all unit testing, so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark





 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




Re: [openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-26 Thread Roman Podoliaka
Hi Clark,

 I think we can safely GRANT ALL on *.* to openstack_citest@localhost and 
 call that good enough
Works for me.

Thanks,
Roman

On Tue, Feb 25, 2014 at 8:29 PM, Clark Boylan clark.boy...@gmail.com wrote:
 On Tue, Feb 25, 2014 at 2:33 AM, Roman Podoliaka
 rpodoly...@mirantis.com wrote:
 Hi all,

 [1] made it possible for the openstack_citest MySQL user to create new
 databases in tests on demand (which is very useful for running tests in
 parallel on MySQL and PostgreSQL, thank you, guys!).

 Unfortunately, the openstack_citest user can only create tables in the
 created databases, but cannot perform SELECT/UPDATE/INSERT queries.
 Please see the bug [2] filed by Joshua Harlow.

 In PostgreSQL the user who creates a database becomes the owner of
 the database (and can do everything within this database), while in
 MySQL we have to GRANT those privileges explicitly. But
 openstack_citest doesn't have the permission to do GRANT (even on its
 own databases).

 I think we could overcome this issue by doing something like this
 while provisioning a node:
 GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
 'openstack_citest'@'localhost';

 and then create databases giving them names starting with the prefix value.

 Is it an acceptable solution? Or am I missing something?

 Thanks,
 Roman

 [1] https://review.openstack.org/#/c/69519/
 [2] https://bugs.launchpad.net/openstack-ci/+bug/1284320


 The problem with the prefix approach is it doesn't scale. At some
 point we will decide we need a new prefix then a third and so on
 (which is basically what happened at the schema level). That said we
 recently switched to using single-use slaves for all unit testing, so I
 think we can safely GRANT ALL on *.* to openstack_citest@localhost and
 call that good enough. This should work fine for upstream testing but
 may not be super friendly to others using the puppet manifests on
 permanent slaves. We can wrap the GRANT in a condition in puppet that
 is set only on single use slaves if this is a problem.

 Clark




[openstack-dev] [Infra] openstack_citest MySQL user privileges to create databases on CI nodes

2014-02-25 Thread Roman Podoliaka
Hi all,

[1] made it possible for the openstack_citest MySQL user to create new
databases in tests on demand (which is very useful for running tests in
parallel on MySQL and PostgreSQL, thank you, guys!).

Unfortunately, the openstack_citest user can only create tables in the
created databases, but cannot perform SELECT/UPDATE/INSERT queries.
Please see the bug [2] filed by Joshua Harlow.

In PostgreSQL the user who creates a database becomes the owner of
the database (and can do everything within this database), while in
MySQL we have to GRANT those privileges explicitly. But
openstack_citest doesn't have the permission to do GRANT (even on its
own databases).

I think we could overcome this issue by doing something like this
while provisioning a node:
GRANT ALL on `some_predefined_prefix_goes_here\_%`.* to
'openstack_citest'@'localhost';

and then create databases giving them names starting with the prefix value.

Is it an acceptable solution? Or am I missing something?

Thanks,
Roman

[1] https://review.openstack.org/#/c/69519/
[2] https://bugs.launchpad.net/openstack-ci/+bug/1284320



Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-20 Thread Roman Podoliaka
Hi all,

I'm ready to help with syncing of the DB code. But we'll need reviewers'
attention in both oslo-incubator and nova :)

Thanks,
Roman

On Thu, Feb 20, 2014 at 5:37 AM, Lance D Bragstad ldbra...@us.ibm.com wrote:
 Shed a little bit of light on Matt's comment about Keystone removing
 oslo-incubator code and the issues we hit. Comments below.


 Best Regards,

 Lance Bragstad
 ldbra...@us.ibm.com

 Doug Hellmann doug.hellm...@dreamhost.com wrote on 02/19/2014 09:00:29 PM:

 From: Doug Hellmann doug.hellm...@dreamhost.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date: 02/19/2014 09:12 PM
 Subject: Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator
 sync workflow





 On Wed, Feb 19, 2014 at 9:52 PM, Joe Gordon joe.gord...@gmail.com wrote:
 As a side to this, as an exercise I tried a oslo sync in cinder to see
 what kind of issues would arise and here are my findings so far:
 https://review.openstack.org/#/c/74786/

 On Wed, Feb 19, 2014 at 6:20 PM, Matt Riedemann
 mrie...@linux.vnet.ibm.com wrote:
 
 
  On 2/19/2014 7:13 PM, Joe Gordon wrote:
 
  Hi All,
 
  As many of you know most oslo-incubator code is wildly out of sync.
  Assuming we consider it a good idea to sync up oslo-incubator code
  before cutting Icehouse, then we have a problem.
 
  Today oslo-incubator code is synced in an ad-hoc manner, resulting in
  duplicated efforts and wildly out-of-date code. Part of the challenges
  today are backwards incompatible changes and new oslo bugs. I expect
  that once we get a single project to have an up to date oslo-incubator
  copy it will make syncing a second project significantly easier. So
  because I (hopefully) have some karma built up in nova, I would like
  to volunteer nova to be the guinea pig.
 
 
  To fix this I would like to propose starting an oslo-incubator/nova
  sync team. They would be responsible for getting nova's oslo code up
  to date.  I expect this work to involve:
  * Reviewing lots of oslo sync patches
  * Tracking the current sync patches
  * Syncing over the low hanging fruit, modules that work without
  changing
  nova.
  * Reporting bugs to oslo team
  * Working with oslo team to figure out how to deal with backwards
  incompatible changes
 * Update nova code or make oslo module backwards compatible
  * Track all this
  * Create a roadmap for other projects to follow (re: documentation)
 
  I am looking for volunteers to help with this effort, any takers?
 
 
  best,
  Joe Gordon
 
 
 
  Well I'll get the ball rolling...
 
  In the past when this has come up there is always a debate over should
  be
  just sync to sync because we should always be up to date, or is that
  dangerous and we should only sync when there is a need (which is what
  the
  review guidelines say now [1]).  There are pros and cons:
 
  pros:
 
  - we get bug fixes that we didn't know existed
  - it should be less painful to sync if we do it more often
 
  cons:
 
  - it's more review overhead and some crazy guy thinks we need a special
  team
  dedicated to reviewing those changes :)
  - there are some changes in o-i that would break nova; I'm specifically
  thinking of the oslo RequestContext which has domain support now (or
  some
  other keystone thingy) and nova has it's own RequestContext - so if we
  did
  sync that from o-i it would change nova's logging context and break on
  us
  since we didn't use oslo context.
 
  For that last con, I'd argue that we should move to the oslo
  RequestContext,
  I'm not sure why we aren't.  Would that module then not fall under
  low-hanging-fruit?

 I am classifying low hanging fruit as anything that doesn't require
 any nova changes to work.

 +1
  I think the DB API modules have been a concern for auto-syncing before
  too
  but I can't remember why now...something about possibly changing the
  behavior of how the nova migrations would work?  But if they are already
  using the common code, I don't see the issue.

 AFAIK there is already a team working on db api syncing, so I was
 thinking of let them deal with it.

 +1

 Doug

 
  This is kind of an aside, but I'm kind of confused now about how the
  syncs
  work with things that fall under oslo.rootwrap or oslo.messaging, like
  this
  patch [2].  It doesn't completely match the o-i patch, i.e. it's not
  syncing
  over openstack/common/rootwrap/wrapper.py, and I'm assuming because
  that's
  in oslo.rootwrap now?  But then why does the code still exist in
  oslo-incubator?
 
  I think the keystone guys are running into a similar issue where they
  want
  to remove a bunch of now-dead messaging code from keystone but can't
  because
  there are still some things in oslo-incubator using oslo.messaging code,
  or
  something weird like that. So maybe those 

Re: [openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-01 Thread Roman Podoliaka
Hi all,

My two cents.

 2) Extend alembic so that op.drop_column() does the right thing
We could, but should we?

The only reason alembic doesn't support these operations for SQLite
yet is that SQLite lacks proper support for the ALTER statement. For
sqlalchemy-migrate we've been providing a work-around in the form of
recreating the table and copying all existing rows (which is a
hack, really).

But to be able to recreate a table, we first must have its definition.
And we've been relying on SQLAlchemy schema reflection facilities for
that. Unfortunately, this approach has a few drawbacks:

1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
unique constraints, which means the recreated table won't have them;

2) special care must be taken in 'edge' cases (e.g. when you want to
drop a BOOLEAN column, you must also drop the corresponding CHECK (col
in (0, 1)) constraint manually, or SQLite will raise an error when the
table is recreated without the column being dropped)

3) special care must be taken for 'custom' type columns (it's got
better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
definitions of reflected BIGINT columns manually for each
column.drop() call)

4) schema reflection can't be performed when alembic migrations are
run in 'offline' mode (without connecting to a DB)
...
(probably something else I've forgotten)

So it's totally doable, but, IMO, there is no real benefit in
supporting schema migrations for SQLite.

 ...attempts to drop schema generation based on models in favor of migrations

As long as we have a test that checks that the DB schema obtained by
running the migration scripts is equal to the one obtained by calling
metadata.create_all(), it's perfectly OK to use model definitions to
generate the initial DB schema for running unit tests as well as
for new installations of OpenStack (and this is actually faster than
running the migration scripts). And if we have strong objections
against doing metadata.create_all(), we can always use migration
scripts for both new installations and upgrades for all DB backends
except SQLite.
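A sketch of that models-vs-migrations check, simulated with stdlib sqlite3: build one schema the way create_all() would (one complete CREATE TABLE) and one by applying incremental "migrations", then compare the resulting structures rather than the raw SQL text (the table and column names are made up for illustration):

```python
import sqlite3

def build(ddl):
    """Apply a list of DDL statements and return table -> [(column, type)]."""
    conn = sqlite3.connect(":memory:")
    for stmt in ddl:
        conn.execute(stmt)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
    return {t: [(c[1], c[2]) for c in conn.execute("PRAGMA table_info(%s)" % t)]
            for t in tables}

# Schema as the models would create it in one shot.
from_models = build(["CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT)"])

# Schema as the migration scripts would build it step by step.
from_migrations = build([
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY)",   # initial migration
    "ALTER TABLE jobs ADD COLUMN state TEXT",       # follow-up migration
])

print(from_models == from_migrations)  # True
```

A real implementation would of course diff reflected SQLAlchemy metadata (including constraints and indexes), but the principle is the same: the two build paths must converge on an identical schema.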

Thanks,
Roman

On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
enikano...@mirantis.com wrote:
 Boris,

 Sorry for the offtopic.
 Is switching to model-based schema generation is something decided? I see
 the opposite: attempts to drop schema generation based on models in favor of
 migrations.
 Can you point to some discussion threads?

 Thanks,
 Eugene.



 On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic bpavlo...@mirantis.com
 wrote:

 Jay,

 Yep we shouldn't use migrations for sqlite at all.

 The major issue that we have now is that we are not able to ensure that the
 DB schemas created by migrations & models are the same (actually they are
 not).

 So before dropping support of migrations for sqlite & switching to
 model-based schema creation, we should add tests that will check that
 models & migrations are synced.
 (we are working on this)



 Best regards,
 Boris Pavlovic


 On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev alaza...@mirantis.com
 wrote:

 Trevor,

 Such a check could be useful on the alembic side too. Good opportunity for a
 contribution.

 Andrew.


 On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay tmc...@redhat.com wrote:

 Okay,  I can accept that migrations shouldn't be supported on sqlite.

 However, if that's the case then we need to fix up savanna-db-manage so
 that it checks the db connection info and throws a polite error to the
 user for attempted migrations on unsupported platforms. For example:

 Database migrations are not supported for sqlite

 Because, as a developer, when I see a sql error trace as the result of
 an operation I assume it's broken :)

 Best,

 Trevor

 On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
  On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
   I was playing with alembic migration and discovered that
   op.drop_column() doesn't work with sqlite.  This is because sqlite
   doesn't support dropping a column (broken imho, but that's another
   discussion).  Sqlite throws a syntax error.
  
   To make this work with sqlite, you have to copy the table to a
   temporary
   excluding the column(s) you don't want and delete the old one,
   followed
   by a rename of the new table.
  
   The existing 002 migration uses op.drop_column(), so I'm assuming
   it's
   broken, too (I need to check what the migration test is doing).  I
   was
   working on an 003.
  
   How do we want to handle this?  Three good options I can think of:
  
   1) don't support migrations for sqlite (I think no, but maybe)
  
   2) Extend alembic so that op.drop_column() does the right thing
   (more
   open-source contributions for us, yay :) )
  
   3) Add our own wrapper in savanna so that we have a drop_column()
   method
   that wraps copy/rename.
  
   Ideas, comments?
 
  Migrations should really not be run against SQLite at all -- only on
  the
  databases that would be used in production. I 

Re: [openstack-dev] [OpenStack-Dev] Cherry picking commit from oslo-incubator

2014-01-17 Thread Roman Podoliaka
Hi all,

Huge +1 for periodic syncs for two reasons:
1) it makes syncs smaller and thus easier
2) code in oslo-incubator often contains important bug fixes (e.g.
incorrect usage of eventlet TLS we found in Nova a few months ago)

Thanks,
Roman

On Fri, Jan 17, 2014 at 10:15 AM, Flavio Percoco fla...@redhat.com wrote:
 On 16/01/14 17:32 -0500, Doug Hellmann wrote:

 On Thu, Jan 16, 2014 at 3:19 PM, Ben Nemec openst...@nemebean.com wrote:

On 2014-01-16 13:48, John Griffith wrote:

Hey Everyone,

A review came up today that cherry-picked a specific commit to OSLO
Incubator, without updating the rest of the files in the module.  I
rejected that patch, because my philosophy has been that when you
update/pull from oslo-incubator it should be done as a full sync of
the entire module, not a cherry pick of the bits and pieces that you
may or may not be interested in.

As it turns out I've received a bit of push back on this, so it seems
maybe I'm being unreasonable, or that I'm mistaken in my understanding
of the process here.  To me it seems like a complete and total waste
to have an oslo-incubator and common libs if you're going to turn
around and just cherry pick changes, but maybe I'm completely out of
line.

Thoughts??


I suppose there might be exceptions, but in general I'm with you.  For one
thing, if someone tries to pull out a specific change in the Oslo code,
there's no guarantee that code even works.  Depending on how the sync was
done it's possible the code they're syncing never passed the Oslo unit
tests in the form being synced, and since unit tests aren't synced to the
target projects it's conceivable that completely broken code could get
through Jenkins.

Obviously it's possible to do a successful partial sync, but for the sake
of reviewer sanity I'm -1 on partial syncs without a _very_ good reason
(like it's blocking the gate and there's some reason the full module can't
be synced).


 I agree. Cherry picking a single (or even partial) commit really should be
 avoided.

 The update tool does allow syncing just a single module, but that should
 be used very VERY carefully, especially because some of the changes
 we're making as we work on graduating some more libraries will include
 cross-dependent changes between oslo modules.


 Agreed. Syncing on master should be a complete synchronization from Oslo
 incubator. IMHO, the only case where cherry-picking from Oslo should
 be allowed is when backporting patches to stable branches. Master
 branches should try to keep up to date with Oslo and sync everything
 every time.

 With that in mind, I'd like to request projects' members to do
 periodic syncs from Oslo incubator. Yes, it is tedious, painful and
 sometimes requires more than just syncing, but we should all try to
 keep up-to-date with Oslo. The main reason why I'm asking this is
 precisely stable branches. If the project stays way behind the
 oslo-incubator, it'll be really painful to backport patches to stable
 branches in case of failures.

 Unfortunately, there are projects that are quite behind from
 oslo-incubator master.

 One last comment. FWIW, backwards compatibility is always considered
 in all Oslo reviews and if there's a crazy-breaking change, it's
 always notified.

 Thankfully, this all will be alleviated with the libs that are being
 pulled out from the incubator. The syncs will contain fewer modules
 and will be smaller.


 I'm happy you brought this up now. I was meaning to do it.

 Cheers,
 FF


 --
 @flaper87
 Flavio Percoco





Re: [openstack-dev] [Ironic] Let's move to Alembic

2014-01-16 Thread Roman Podoliaka
Hi all,

I'm glad you've decided to drop sqlalchemy-migrate support :)

As for porting Ironic to alembic migrations: I believe Dmitriy
Shulyak already uploaded a proof-of-concept patch to Ironic before,
but it was abandoned. Adding Dmitriy to this thread so he knows he can
restore his patch and continue his work.

Thanks,
Roman

On Thu, Jan 16, 2014 at 5:59 AM, Devananda van der Veen
devananda@gmail.com wrote:
 Hi all,

 Some months back, there was discussion to move Ironic to use Alembic instead
 of sqlalchemy-migrate. At that time, I was much more interested in getting the
 framework together than I was in restructuring our database migrations, and
 what we had was sufficient to get us off the ground.

 Now that the plumbing is coming together, and we're looking hopefully at
 doing a release this cycle, I'd like to see if anyone wants to pick up the
 torch and switch our db migrations to use alembic. Ideally, let's do this
 between the I2 and I3 milestones.

 I am aware of the work adding a transition-to-alembic to Oslo:
 https://review.openstack.org/#/c/59433/

 I feel like we don't necessarily need to wait for that to land. There's a
 lot less history in our migrations than in, say, Nova; we don't yet support
 down-migrations anyway; and there aren't any prior releases of the project
 which folks could upgrade from.

 Thoughts?

 -Deva





Re: [openstack-dev] [infra] Unit tests, gating, and real databases

2014-01-08 Thread Roman Podoliaka
Hi Ivan,

Indeed, nodepool nodes have MySQL and PostgreSQL installed and
running. There are databases you can access from your tests
(mysql://openstack_citest:openstack_citest@localhost/openstack_citest
and postgresql://openstack_citest:openstack_citest@localhost/openstack_citest).
[1] is a great example of how it's actually used for running
backend-specific DB test cases in oslo-incubator.

Besides, the openstack_citest user in PostgreSQL is allowed to create/drop
databases, which enables us to implement a slightly different approach
to running DB tests [2]. This might be very useful when you need more
than one DB schema (e.g. to run tests concurrently).

Thanks,
Roman

[1] https://review.openstack.org/#/c/54375/
[2] https://review.openstack.org/#/c/47818/
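The approach in [2] boils down to giving every test its own uniquely named database so tests can run concurrently; on the CI nodes this relies on the openstack_citest PostgreSQL account being allowed to CREATE/DROP DATABASE. A stdlib sketch of the same idea using throwaway SQLite files (the helper and table names are made up for illustration):

```python
import os
import sqlite3
import tempfile
import uuid

def make_test_db(base_dir):
    # On PostgreSQL this step would be roughly:
    #   CREATE DATABASE "test_<uuid>" OWNER openstack_citest;
    path = os.path.join(base_dir, "test_%s.sqlite" % uuid.uuid4().hex)
    conn = sqlite3.connect(path)
    # Each test sets up its own fixtures in its own isolated schema.
    conn.execute("CREATE TABLE fixtures (id INTEGER PRIMARY KEY)")
    return path, conn

with tempfile.TemporaryDirectory() as base:
    db1, c1 = make_test_db(base)
    db2, c2 = make_test_db(base)
    print(db1 != db2)  # True: two tests never share a schema
    c1.close()
    c2.close()
```

Dropping the database afterwards (here, deleting the temporary directory) is what keeps repeated CI runs from accumulating state.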

On Fri, Jan 3, 2014 at 9:17 PM, Ivan Melnikov
imelni...@griddynamics.com wrote:
 Hi there,

 As far as I understand, slaves that run gate-*-python27 and python26
 jobs have MySQL and Postgres servers installed and running so we can
 test migrations and do functional testing for database-related code.
 I wanted to use this to improve TaskFlow gating, but I failed to find
 docs about it or to derive how these database instances should be
 used from the nova and oslo.db test code.

 Can anyone give some hints or pointers on where should I get
 connection config and what can I do with those database servers in
 unit and functional tests?

 --
 WBR,
 Ivan A. Melnikov




Re: [openstack-dev] [Neutron] Alembic migrations

2013-12-22 Thread Roman Podoliaka
Hi Gary,

It's a known bug (the migration script creating the 'agents' table is
mistakenly not applied when running schema migrations with the ML2 core
plugin selected). There is a patch on review
https://review.openstack.org/#/c/61677 fixing this error.

Thanks,
Roman

On Sun, Dec 22, 2013 at 4:02 PM, Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Anyone else encounter the following exception:

 + /usr/local/bin/neutron-db-manage --config-file /etc/neutron/neutron.conf
 --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
 No handlers could be found for logger neutron.common.legacy
 INFO  [alembic.migration] Context impl MySQLImpl.
 INFO  [alembic.migration] Will assume non-transactional DDL.
 INFO  [alembic.migration] Running upgrade None -> folsom, folsom initial
 database
 INFO  [alembic.migration] Running upgrade folsom -> 2c4af419145b, l3_support
 INFO  [alembic.migration] Running upgrade 2c4af419145b -> 5a875d0e5c, ryu
 INFO  [alembic.migration] Running upgrade 5a875d0e5c -> 48b6f43f7471, DB
 support for service types
 INFO  [alembic.migration] Running upgrade 48b6f43f7471 -> 3cb5d900c5de,
 security_groups
 INFO  [alembic.migration] Running upgrade 3cb5d900c5de -> 1d76643bcec4,
 nvp_netbinding
 INFO  [alembic.migration] Running upgrade 1d76643bcec4 -> 2a6d0b51f4bb,
 cisco plugin cleanup
 INFO  [alembic.migration] Running upgrade 2a6d0b51f4bb -> 1b693c095aa3,
 Quota ext support added in Grizzly
 INFO  [alembic.migration] Running upgrade 1b693c095aa3 -> 1149d7de0cfa,
 inital port security
 INFO  [alembic.migration] Running upgrade 1149d7de0cfa -> 49332180ca96, ryu
 plugin update
 INFO  [alembic.migration] Running upgrade 49332180ca96 -> 38335592a0dc,
 nvp_portmap
 INFO  [alembic.migration] Running upgrade 38335592a0dc -> 54c2c487e913, 'DB
 support for load balancing service
 INFO  [alembic.migration] Running upgrade 54c2c487e913 -> 45680af419f9,
 nvp_qos
 INFO  [alembic.migration] Running upgrade 45680af419f9 -> 1c33fa3cd1a1,
 Support routing table configuration on Router
 INFO  [alembic.migration] Running upgrade 1c33fa3cd1a1 -> 363468ac592c,
 nvp_network_gw
 INFO  [alembic.migration] Running upgrade 363468ac592c -> 511471cc46b, Add
 agent management extension model support
 INFO  [alembic.migration] Running upgrade 511471cc46b -> 3b54bf9e29f7, NEC
 plugin sharednet
 INFO  [alembic.migration] Running upgrade 3b54bf9e29f7 -> 4692d074d587,
 agent scheduler
 INFO  [alembic.migration] Running upgrade 4692d074d587 -> 1341ed32cc1e,
 nvp_net_binding
 INFO  [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly, grizzly
 INFO  [alembic.migration] Running upgrade grizzly -> f489cf14a79c, DB
 support for load balancing service (havana)
 INFO  [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79, Add
 portbindings db
 INFO  [alembic.migration] Running upgrade 176a85fc7d79 -> 32b517556ec9,
 remove TunnelIP model
 INFO  [alembic.migration] Running upgrade 32b517556ec9 -> 128e042a2b68,
 ext_gw_mode
 INFO  [alembic.migration] Running upgrade 128e042a2b68 -> 5ac71e65402c,
 ml2_initial
 INFO  [alembic.migration] Running upgrade 5ac71e65402c -> 3cbf70257c28,
 nvp_mac_learning
 INFO  [alembic.migration] Running upgrade 3cbf70257c28 -> 5918cbddab04, add
 tables for router rules support
 INFO  [alembic.migration] Running upgrade 5918cbddab04 -> 3cabb850f4a5,
 Table to track port to host associations
 INFO  [alembic.migration] Running upgrade 3cabb850f4a5 -> b7a8863760e,
 Remove cisco_vlan_bindings table
 INFO  [alembic.migration] Running upgrade b7a8863760e - 13de305df56e,
 nec_add_pf_name
 INFO  [alembic.migration] Running upgrade 13de305df56e - 20ae61555e95, DB
 Migration for ML2 GRE Type Driver
 INFO  [alembic.migration] Running upgrade 20ae61555e95 - 477a4488d3f4, DB
 Migration for ML2 VXLAN Type Driver
 INFO  [alembic.migration] Running upgrade 477a4488d3f4 - 2032abe8edac,
 LBaaS add status description
 INFO  [alembic.migration] Running upgrade 2032abe8edac - 52c5e4a18807,
 LBaaS Pool scheduler
 INFO  [alembic.migration] Running upgrade 52c5e4a18807 - 557edfc53098, New
 service types framework (service providers)
 INFO  [alembic.migration] Running upgrade 557edfc53098 - e6b16a30d97, Add
 cisco_provider_networks table
 INFO  [alembic.migration] Running upgrade e6b16a30d97 - 39cf3f799352, FWaaS
 Havana-2 model
 INFO  [alembic.migration] Running upgrade 39cf3f799352 - 52ff27f7567a,
 Support for VPNaaS
 INFO  [alembic.migration] Running upgrade 52ff27f7567a - 11c6e18605c8, Pool
 Monitor status field
 INFO  [alembic.migration] Running upgrade 11c6e18605c8 - 35c7c198ddea,
 remove status from HealthMonitor
 INFO  [alembic.migration] Running upgrade 35c7c198ddea - 263772d65691,
 Cisco plugin db cleanup part II
 INFO  [alembic.migration] Running upgrade 263772d65691 - c88b6b5fea3, Cisco
 N1KV tables
 INFO  [alembic.migration] Running upgrade c88b6b5fea3 - f9263d6df56,
 remove_dhcp_lease
 INFO  [alembic.migration] Running upgrade f9263d6df56 - 569e98a8132b,
 metering
 INFO  [alembic.migration] Running upgrade 569e98a8132b - 

Re: [openstack-dev] [TripleO] Gerrit review refs now supported by diskimage-builder's source-repositories element

2013-12-20 Thread Roman Podoliaka
Hi Chris,

This is super useful for testing patches on review! Thank you!

Roman

On Fri, Dec 20, 2013 at 7:35 PM, Chris Jones c...@tenshu.net wrote:
 Hi

 As of just now (review 63021) the source-repositories element in
 diskimage-builder can fetch git repos from gerrit reviews.

 I figured it'd be worth mentioning here because it's super useful if you
 want to test the code from one or more gerrit reviews, in a TripleO
 environment.

 A quick example, let's say you're using our devtest.sh script to build your
 local environment and you want to try out patch set 9 of Yuiko Takada's
 latest nova bug fix, all you need to do is:

 export DIB_REPOLOCATION_nova=https://review.openstack.org/openstack/nova
 export DIB_REPOREF_nova=refs/changes/56/53056/9
 ./scripts/devtest.sh

 Bam!

 (FWIW, the same env vars work if you're calling disk-image-create directly)

 --
 Cheers,

 Chris





Re: [openstack-dev] [Olso][DB] Remove eventlet from oslo.db

2013-12-03 Thread Roman Podoliaka
Hey all, Mark,

Yes, that's exactly what we were going to do! (and there is a similar
class in eventlet itself, Victor is currently trying to run Nova with
it applied, so we might end up with no module/class at all, but rather
use the one from eventlet instead)

Thank you all for your feedback!

Roman

On Wed, Dec 4, 2013 at 1:45 AM, Mark McLoughlin mar...@redhat.com wrote:
 On Mon, 2013-12-02 at 16:02 +0200, Victor Sergeyev wrote:
 Hi folks!

 At the moment I and Roman Podoliaka are working on splitting of
 openstack.common.db code into a separate library. And it would be nice to
 drop dependency on eventlet before oslo.db is released.

 Currently, there is only one place in oslo.db where we use eventlet:
 wrapping of DB API method calls so that they are executed by tpool
 threads. This is only needed when eventlet is used together with a
 DB-API driver implemented as a Python C extension (eventlet can't monkey
 patch C code, so we end up with DB API calls blocking all green threads
 when using Python-MySQLdb). eventlet has a workaround known as 'tpool',
 which is basically a pool of real OS threads that can play nicely with
 the eventlet event loop. The tpool feature is experimental and known to
 have stability problems, and there is doubt that anyone is using it in
 production at all. Nova API (and probably other API services) has an
 option to prefork the process on start, so that they don't need tpool
 when using eventlet together with Python-MySQLdb.
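
[Editorial note: the tpool idea described above can be sketched with a plain
pool of OS threads. This is an illustration of the approach only; the class
and attribute names are made up and are not oslo.db's actual API:]

```python
from concurrent.futures import ThreadPoolExecutor


class ThreadPoolDBAPIWrapper(object):
    """Proxy every DB API call into a pool of real OS threads.

    Blocking calls into a C-extension DB driver run in native threads,
    so they cannot stall the cooperative event loop.
    """

    def __init__(self, dbapi, pool_size=10):
        self._dbapi = dbapi
        self._pool = ThreadPoolExecutor(max_workers=pool_size)

    def __getattr__(self, name):
        attr = getattr(self._dbapi, name)
        if not callable(attr):
            return attr

        def call(*args, **kwargs):
            # submit() hands the blocking call to a native thread;
            # result() waits for it (eventlet's tpool.execute would
            # yield to other green threads here instead of blocking).
            return self._pool.submit(attr, *args, **kwargs).result()

        return call


class FakeDBAPI(object):
    """Stand-in for a real DB API module, for demonstration only."""

    def instance_get(self, instance_id):
        return {"id": instance_id}


dbapi = ThreadPoolDBAPIWrapper(FakeDBAPI())
print(dbapi.instance_get(42)["id"])  # -> 42
```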

 We'd really like to drop tpool support from oslo.db, because as a library
 we should not be bound to any particular concurrency model. If a target
 project is using eventlet, we believe, it is its problem how to make it
 play nicely with Python-MySQLdb lib, but not the problem of oslo.db.
 Though, we could put tpool wrapper into another helper module within
 oslo-incubator.

 But we would really-really like not to have any eventlet related code in
 oslo.db.

 Are you using CONF.database.use_tpool in production? Does the approach with
 a separate tpool wrapper class seem reasonable? Or we can just drop tpool
 support at all, if no one is using it?

 Another approach is to put the tpool wrapper class in a separate module
 which would be completely optional for users of the library.

 For example, you could imagine people who don't want this doing:

   from oslo import db

   dbapi = db.DBAPI()

 but if you want the tpool thing, you might do:

   from oslo import db
   from oslo.db import eventlet as db_eventlet

   dbapi = db_eventlet.TpoolWrapper(db.DBAPI())

 (I'm just making stuff up, but you get the idea)

 The key thing is that eventlet isn't a hard dependency of the library,
 but the useful eventlet integration is still available in the library if
 you want it.

 We did something similar in oslo.messaging, and the issues there were
 probably more difficult to deal with.

 Mark.





Re: [openstack-dev] Top Gate Bugs

2013-11-20 Thread Roman Podoliaka
Hey all,

I think I found a serious bug in our usage of eventlet thread local
storage. Please check out this snippet [1].

This is how we use eventlet TLS in Nova and common Oslo code [2]. This
could explain how [3] actually breaks the TripleO devtest story and our
gates.

Am I right? Or am I missing something and should get some sleep? :)

Thanks,
Roman

[1] http://paste.openstack.org/show/53686/
[2] 
https://github.com/openstack/nova/blob/master/nova/openstack/common/local.py#L48
[3] 
https://github.com/openstack/nova/commit/85332012dede96fa6729026c2a90594ea0502ac5
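
[Editorial note: this pure-threading snippet shows the guarantee at stake:
a threading.local() gives each OS thread its own storage. The eventlet
pitfall is that a store created *before* monkey patching stays
OS-thread-local, so all green threads multiplexed onto one OS thread would
share it. The sketch below only demonstrates the expected per-thread
isolation; it does not reproduce the eventlet bug itself:]

```python
import threading

# Each OS thread sees its own copy of the attributes on this object.
store = threading.local()
results = {}


def worker(name):
    store.value = name           # each thread writes its own value...
    results[name] = store.value  # ...and reads back only its own

threads = [threading.Thread(target=worker, args=("t%d" % i,))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.items()))
# -> [('t0', 't0'), ('t1', 't1'), ('t2', 't2')]
```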

On Wed, Nov 20, 2013 at 5:55 PM, Derek Higgins der...@redhat.com wrote:
 On 20/11/13 14:21, Anita Kuno wrote:
 Thanks for posting this, Joe. It really helps to create focus so we can
 address these bugs.

 We are chatting in #openstack-neutron about 1251784, 1249065, and 1251448.

 We are looking for someone to work on 1251784 - I had mentioned it at
 Monday's Neutron team meeting and am trying to shop it around in
 -neutron now. We need someone other than Salvatore, Aaron or Maru to
 work on this since they each have at least one very important bug they
 are working on. Please join us in #openstack-neutron and lend a hand -
 all of OpenStack needs your help.

 I've been hitting this in tripleo intermittently for the last few days
 (or at least it looks to be the same bug). This morning, while trying to
 debug the problem, I noticed HTTP requests/responses happening out of
 order. I've added details to the bug.

 https://bugs.launchpad.net/tripleo/+bug/1251784


 Bug 1249065 is assigned to Aaron Rosen, who isn't in the channel at the
 moment, so I don't have an update on his progress or any blockers he is
 facing. Hopefully (if you are reading this Aaron) he will join us in
 channel soon and I had hear from him about his status.

 Bug 1251448 is assigned to Maru Newby, who I am talking with now in
 -neutron. He is addressing the bug. I will share what information I have
 regarding this one when I have some.

 We are all looking forward to a more stable gate and this information
 really helps.

 Thanks again, Joe,
 Anita.

 On 11/20/2013 01:09 AM, Joe Gordon wrote:
 Hi All,

 As many of you have noticed the gate has been in very bad shape over the
 past few days.  Here is a list of some of the top open bugs (without
 pending patches, and many recent hits) that we are hitting.  Gate won't be
 stable, and it will be hard to get your code merged, until we fix these
 bugs.

 1) https://bugs.launchpad.net/bugs/1251920
  nova
 468 Hits
 2) https://bugs.launchpad.net/bugs/1251784
  neutron, Nova
  328 Hits
 3) https://bugs.launchpad.net/bugs/1249065
  neutron
   122 hits
 4) https://bugs.launchpad.net/bugs/1251448
  neutron
 65 Hits

 Raw Data:


 Note: If a bug has any hits for anything besides failure, it means the
 fingerprint isn't perfect.

 Elastic recheck known issues
  Bug: https://bugs.launchpad.net/bugs/1251920 = message:assertionerror:
 console output was empty AND filename:console.html Title: Tempest
 failures due to failure to return console logs from an instance Project:
 Status nova: Confirmed Hits FAILURE: 468 Bug:
 https://bugs.launchpad.net/bugs/1251784 = message:Connection to neutron
 failed: Maximum attempts reached AND filename:logs/screen-n-cpu.txt
 Title: nova+neutron scheduling error: Connection to neutron failed: Maximum
 attempts reached Project: Status neutron: New nova: New Hits FAILURE: 328
 UNSTABLE: 13 SUCCESS: 275 Bug: https://bugs.launchpad.net/bugs/1240256 =
 message: 503 AND filename:logs/syslog.txt AND
 syslog_program:proxy-server Title: swift proxy-server returning 503
 during tempest run Project: Status openstack-ci: Incomplete swift: New
 tempest: New Hits FAILURE: 136 SUCCESS: 83
 Pending Patch Bug: https://bugs.launchpad.net/bugs/1249065 = message:No
 nw_info cache associated with instance AND
 filename:logs/screen-n-api.txt Title: Tempest failure:
 tempest/scenario/test_snapshot_pattern.py Project: Status neutron: New
 nova: Confirmed Hits FAILURE: 122 Bug:
 https://bugs.launchpad.net/bugs/1252514 = message:Got error from Swift:
 put_object AND filename:logs/screen-g-api.txt Title: glance doesn't
 recover if Swift returns an error Project: Status devstack: New glance: New
 swift: New Hits FAILURE: 95
 Pending Patch Bug: https://bugs.launchpad.net/bugs/1244255 =
 message:NovaException: Unexpected vif_type=binding_failed AND
 filename:logs/screen-n-cpu.txt Title: binding_failed because of l2 agent
 assumed down Project: Status neutron: Fix Committed Hits FAILURE: 92
 SUCCESS: 29 Bug: https://bugs.launchpad.net/bugs/1251448 = message:
 possible networks found, use a Network ID to be more specific. (HTTP 400)
 AND filename:console.html Title: BadRequest: Multiple possible networks
 found, use a Network ID to be more specific. Project: Status neutron: New
 Hits FAILURE: 65 Bug: https://bugs.launchpad.net/bugs/1239856 =
 message:tempest/services AND message:/images_client.py AND
 message:wait_for_image_status AND 

Re: [openstack-dev] sqlalchemy-migrate needs a new release

2013-11-15 Thread Roman Podoliaka
Hey,

Awesome! Thank you, guys!

I'm going to give it a try early next week (we basically need to
run/fix tests for every project that uses sqlalchemy-migrate before we
can bump the version in global-requirements, but at least we can start
doing this now).

Roman

On Fri, Nov 15, 2013 at 5:41 PM, David Ripton drip...@redhat.com wrote:
 On 11/14/2013 03:43 PM, David Ripton wrote:

 On 11/11/2013 03:35 PM, David Ripton wrote:

 I'll volunteer to do this release.  I'll wait 24 hours from the
 timestamp of this email for input first.  So, if anyone has opinions
 about the timing of this release, please speak up.

 (In particular, I'd like to do a release *before* Matt Riedermann's DB2
 support patch https://review.openstack.org/#/c/55572/ lands, just in
 case it breaks anything.  Of course we could do another release shortly
 after it gets in, to make folks who use DB2 happy.)


 Update:

 There's now a 0.8 tag in Git but that release failed to reach PyPI, so
 please ignore it.

 Thanks fungi and mordred for helping debug what went wrong.

 https://review.openstack.org/#/c/56449/ (a one-liner) should fix the
 problem.  Once it gets approved, I will attempt to push 0.8.1.


 Update 2:

 sqlalchemy-migrate-0.8.1 is now up on PyPI.  Thanks fungi for kicking PyPI
 for me.


 --
 David Ripton   Red Hat   drip...@redhat.com




[openstack-dev] [Oslo] Using of oslo.config options in openstack.common modules

2013-11-12 Thread Roman Podoliaka
Hi all,

Currently, many modules from the openstack.common package register
oslo.config options. And this is completely OK while these modules are
copied to target projects using the update.py script.

But consider the situation when we decide to split a new library out of
oslo-incubator - oslo.spam - and this library uses the module
openstack.common.eggs, just because we don't want to reinvent the
wheel and this module is really useful. Let's say the eggs module
defines a config option 'foo', and this module is also used in Nova.
Now we want to use oslo.spam in Nova too.

So here is the tricky part: if the versions of openstack.common.eggs
in oslo.spam and openstack.common.eggs in Nova define config option
'foo' differently (e.g. the version in Nova is outdated and doesn't
provide the help string), oslo.config will raise DuplicateOptError.
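
[Editorial note: the conflict can be illustrated with a toy registry that
mimics the check oslo.config performs; this is not oslo.config's real
implementation:]

```python
class DuplicateOptError(Exception):
    pass


class OptRegistry(object):
    """Toy model of the duplicate-option check.

    Registering the same option twice is fine if the definitions match;
    two *different* definitions of 'foo' -- e.g. one copy of
    openstack.common.eggs with a help string and an outdated copy
    without one -- raise DuplicateOptError.
    """

    def __init__(self):
        self._opts = {}

    def register_opt(self, name, **definition):
        if name in self._opts and self._opts[name] != definition:
            raise DuplicateOptError("duplicate option: %s" % name)
        self._opts[name] = definition


registry = OptRegistry()
registry.register_opt("foo", default="bar", help="Spam setting.")
registry.register_opt("foo", default="bar", help="Spam setting.")  # no-op
try:
    registry.register_opt("foo", default="bar", help=None)  # outdated copy
except DuplicateOptError as exc:
    print(exc)  # -> duplicate option: foo
```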

There are at least two ways to solve this problem:
1) don't use openstack.common code in oslo.* libraries
2) don't register config options in openstack.common modules

The former is totally doable, but it means that we will end up
repeating ourselves, because we already have a set of very useful
modules (e.g. lockutils) and there is little sense in rewriting them
from scratch within oslo.* libraries.

The latter means that we should refactor the existing code in
openstack.common package. As these modules are meant to be libraries,
it's strange that they rely on config values to control their behavior
instead of using the traditional approach of passing
function/method/class constructor arguments.
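
[Editorial note: the suggested refactoring, sketched with illustrative names
(not the real lockutils/oslo.db API): the library takes plain constructor
arguments, and only the application layer reads configuration:]

```python
# Library code that reads values from a global CONF at call time couples
# every consumer to one shared option namespace. Passing the same values
# as constructor arguments keeps the module self-contained; the
# *application* can still feed it from oslo.config.

class LockManager(object):
    def __init__(self, lock_path, timeout=30):
        # Behavior is controlled by plain arguments, not CONF lookups.
        self.lock_path = lock_path
        self.timeout = timeout


# The application layer is the only place that touches configuration:
conf = {"lock_path": "/var/lock/nova", "lock_timeout": 10}  # stand-in for CONF
manager = LockManager(conf["lock_path"], timeout=conf["lock_timeout"])
print(manager.lock_path, manager.timeout)  # -> /var/lock/nova 10
```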

...or I might be missing something :)

Thoughts?

Thanks,
Roman



Re: [openstack-dev] [TripleO] Releases of this week

2013-11-11 Thread Roman Podoliaka
Hey all,

I've closed all the bugs we've released fixes for.

I've also created a wiki page [1] describing the process of making new
releases. Feel free to update and use it.

Roman

[1] https://wiki.openstack.org/wiki/TripleO/ReleaseManagement

On Wed, Nov 6, 2013 at 11:02 AM, Roman Podoliaka
rpodoly...@mirantis.com wrote:
 Hey,

 Cool! Thanks for sharing this!

 Roman


 On Wednesday, November 6, 2013, Sergey Lukjanov wrote:

 Here is the script for processing bug while releasing -
 https://github.com/ttx/openstack-releasing/blob/master/process_bugs.py

 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

 On Nov 6, 2013, at 1:42 PM, Roman Podoliaka rpodoly...@mirantis.com
 wrote:

 Hey,

 Oh, that's a pity. I didn't know that. Sure, I'll update the doc and
 look for a way to automate the process.

 Roman

 On Wednesday, November 6, 2013, Robert Collins wrote:

 Awesome work - thank you!!!

 Can you please add to your docs though, that we need to go and close
 the bugs in the project (either via a script or by hand) - gerrit
 leaves them as Fix Committed today.

 Cheers,
 Rob

 On 2 November 2013 04:38, Roman Podoliaka rpodoly...@mirantis.com wrote:
  Hi all,
 
  This week I've been doing releases of all projects, which belong to
  TripleO program. Here are release notes you might be interested in:
 
  os-collect-config  - 0.1.5 (was 0.1.4):
  - default polling interval was reduced to 30 seconds
  - requirements were updated to use the new iso8601 version
  fixing important bugs
 
  diskimage-builder - 0.0.9 (was 0.0.8)
   - added support for bad Fedora image mirrors (retry the
  request once on 404)
   - removed dependency on dracut-network from fedora element
   - fixed the bug with removing of lost+found dir if it's not
  found
 
  tripleo-image-elements  - 0.1.0 (was 0.0.4)
   - switched to tftpd-hpa on Fedora and Ubuntu
   - made it possible to disable file injection in Nova
   - switched seed vm to Neutron native PXE
   - added Fedora support to apache2 element
   - fixed processing of routes in init-neutron-ovs
   - fixed Heat watch server url key name in seed vm metadata
 
  tripleo-heat-templates - 0.1.0 (was 0.0.1)
   - disabled Nova Baremetal file injection (undercloud)
   - made LaunchConfiguration resources mergeable
   - made neutron public interface configurable (overcloud)
   - made it possible to set public interface IP (overcloud)
   - allowed making the public interface a VLAN (overcloud)
   - added a wait condition for signalling that overcloud is ready
   - added metadata for Nova floating-ip extension
   - added tuskar API service configuration
   - hid AdminToken in Heat templates
   - added Ironic service configuration
 
   tuskar - 0.0.2 (was 0.0.1)
   - made it possible to pass Glance image id
   - fixed the bug with duplicated Resource Class names
 
   tuskar-ui - 0.0.2 (was 0.0.1)
- resource class creation form no longer ignores the image
  selection
- separated flavors creation step
- fail gracefully on node detail page when no overcloud
- added validation of MAC addresses and CIDR values
- stopped appending Resource Class name to Resource Class
  flavors
- fixed JS warnings when $ is not available
- fixed links and naming in Readme
- various code and test fixes (pep8, refactoring)
 
python-tuskarclient - 0.0.2 (was 0.0.1)
- fixed processing of 301 response code
 
os-apply-config and os-refresh-config haven't had new commits
  since the last release
 
  This also means that:
  1. We are now releasing all the projects we have.
  2. *tuskar* projects have got PyPi entries.
 
  Last but not least.
 
  I'd like to say a big thank you to Chris Jones who taught me 'Release
  Management 101' and provided patches to openstack/infra-config to make
  all our projects 'releasable'; Robert Collins for his advice on
  version numbering; Clark Boylan and Jeremy Stanley for landing of
  Gerrit ACL patches and debugging PyPi uploads issues; Radomir
  Dopieralski and Tomas Sedovic for landing a quick fix to tuskar-ui.
 
  Thank you all guys, you've helped me a lot!
 
  Roman
 



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist



[openstack-dev] sqlalchemy-migrate needs a new release

2013-11-11 Thread Roman Podoliaka
Hey all,

As you may know, in our global requirements list [1] we are currently
depending on SQLAlchemy 0.7.x versions (which is the 'old stable' branch
and will be deprecated soon). This is mostly due to the fact that the
latest release of sqlalchemy-migrate on PyPI doesn't support
SQLAlchemy 0.8.x+.

At the same time, distros have been providing patches for fixing this
incompatibility for a long time now. Moreover, those patches have been
merged to sqlalchemy-migrate master too.

As we are now maintaining sqlalchemy-migrate, we could make a new
release of it. This would allow us to bump the version of SQLAlchemy
release we are depending on (as soon as we fix all the bugs we have)
and let distro maintainers stop carrying their own patches.

This has been discussed at the design summit [2], so we basically just
need a volunteer from the Gerrit ACL group [3] to make a new release.

Is sqlalchemy-migrate stable enough to make a new release? I think
yes. The commits we've merged since we adopted this library only fix
a few issues with SQLAlchemy 0.8.x compatibility and enable running of
tests (we are currently testing all new changes on py26/py27,
SQLAlchemy 0.7.x/0.8.x, SQLite/MySQL/PostgreSQL).

Who wants to help? :)

Thanks,
Roman

[1] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt
[2] https://etherpad.openstack.org/p/icehouse-oslo-db-migrations
[3] https://review.openstack.org/#/admin/groups/186,members


