Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-01 Thread Michael Still
I saw another case of the "build succeeded" message for a failure just
now... https://review.openstack.org/#/c/59101/ has a rebase failure
but was marked as successful.

Is this another case of hyper-v not being voting and therefore being a
bit confusing? The text of the comment clearly indicates this is a
failure at least.

Thanks,
Michael

On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
 wrote:
> On 25 Jan 2014, at 16:51 , Matt Riedemann  wrote:
>
>>
>>
>> On 1/24/2014 3:41 PM, Peter Pouliot wrote:
>>> Hello OpenStack Community,
>>>
>>> I am excited at this opportunity to make the community aware that the
>>> Hyper-V CI infrastructure
>>>
>>> is now up and running.  Let's first start with some housekeeping
>>> details.  Our Tempest logs are
>>>
>>> publicly available here: http://64.119.130.115. You will see them show
>>> up in any
>>>
>>> Nova Gerrit commit from this moment on.
>>> 
>>
>> So now some questions. :)
>>
>> I saw this failed on one of my nova patches [1].  It says the build 
>> succeeded but that the tests failed.  I talked with Alessandro about this 
>> yesterday and he said that's working as designed, something with how the 
>> scoring works with zuul?
>
> I spoke with clarkb on infra, since we were also very puzzled by this 
> behaviour. I've been told that when the job is non-voting, it's always 
> reported as succeeded, which makes sense, although it is slightly misleading.
> The message in the Gerrit comment clearly states: "Test run failed in ..m 
> ..s (non-voting)", so this should be fair enough. It'd be great to have a way 
> to get rid of the "Build succeeded" message above.
>
>> The problem I'm having is figuring out why it failed.  I looked at the 
>> compute logs but didn't find any errors.  Can someone help me figure out 
>> what went wrong here?
>>
>
> The reason for the failure of this job can be found here:
>
> http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz
>
> Please search for "(1054, "Unknown column 'instances.locked_by' in 'field 
> list'")"
>
> In this case the job failed when "nova service-list" was called to verify 
> whether the compute nodes had been properly added to the devstack instance in 
> the overcloud.
>
> During the weekend we also added a console.log to help simplify debugging, 
> especially in the rare cases in which the job fails before getting to run 
> tempest:
>
> http://64.119.130.115/69047/1/console.log.gz
>
>
> Let me know if this helps in tracking down your issue!
>
> Alessandro
>
>
>> [1] https://review.openstack.org/#/c/69047/1
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



-- 
Rackspace Australia



Re: [openstack-dev] [keystone][heat] Migration to keystone v3 API questions

2014-02-01 Thread Dolph Mathews
On Sat, Feb 1, 2014 at 12:33 PM, Anne Gentle  wrote:

>
>
>
> On Thu, Jan 23, 2014 at 5:21 AM, Steven Hardy  wrote:
>
>> Hi all,
>>
>> I've recently been working on migrating the heat internal interfaces to
>> use
>> the keystone v3 API exclusively[1].
>>
>> This work has mostly been going well, but I've hit a couple of issues
>> which
>> I wanted to discuss, so we agree the most appropriate workarounds:
>>
>> 1. keystoneclient v3 functionality not accessible when catalog contains a
>> v2 endpoint:
>>
>> In my test environment my keystone endpoint looks like:
>>
>> http://127.0.0.1:5000/v2.0
>>
>> And I'd guess this is similar to the majority of real deployments atm?
>>
>>
> Yes, I was just researching this for the Ops Guide O'Reilly edition, and
> don't see evidence of deployments doing otherwise in their endpoint
> definition.
>
> Also I didn't uncover many (any?) deployments going from Identity v2 to v3
> yet. Meaning, if they're already running v2, when they upgrade to havana,
> they do not move to Identity v3.
>
>
>
>> So when creating a keystoneclient object I've been doing:
>>
>> from keystoneclient.v3 import client as kc_v3
>> v3_endpoint = self.context.auth_url.replace('v2.0', 'v3')
>> client = kc_v3.Client(auth_url=v3_endpoint, ...
>>
>> Which, assuming the keystone service has both v2 and v3 APIs enabled
>> works, but any attempt to use v3 functionality fails with 404 because
>> keystoneclient falls back to using the v2.0 endpoint from the catalog.
>>
>> So to work around this I do this:
>>
>> client = kc_v3.Client(auth_url=v3_endpoint, endpoint=v3_endpoint, ...
>> client.authenticate()
>>
>> Which results in the v3 features working OK.
>>
>> So my questions are:
>> - Is this a reasonable workaround for production environments?
>> - What is the roadmap for moving keystone endpoints to be version
>> agnostic?
>> - Is there work ongoing to make the client smarter in terms of figuring
>> out
>>   what URL to use (version negotiation or substituting the appropriate
>> path
>>   when we are in an environment with a legacy v2.0 endpoint..)
>>
>>
> I'd like to understand the ramifications of
> https://review.openstack.org/#/c/62801/ so I have a few questions:
>
> - If keystone service catalog endpoints become version agnostic, does that
> make other projects' support of multiple versions of the Identity API
> easier?
>

Yes, because they can discover the versioned API endpoint they need at
runtime (which may differ from that required by another identity client),
rather than requiring additional external configuration (or adding further
bloat to the service catalog; every service that's overloading the service
type with versioning is doing it *terribly* wrong).
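To illustrate the runtime-discovery point above: a version-agnostic Keystone root URL answers with a document listing the versioned endpoints it serves, and a client can simply pick the one it needs. The payload and helper below are a minimal sketch -- the JSON only approximates the shape of the real discovery response, and `pick_endpoint` is an invented name, not keystoneclient's actual discovery code:

```python
import json

# Illustrative version-discovery document, shaped roughly like what a
# version-agnostic Keystone root endpoint (GET http://host:5000/) returns.
# Treat the exact payload as an assumption for this sketch.
SAMPLE = json.loads("""
{
  "versions": {"values": [
    {"id": "v3.0", "status": "stable",
     "links": [{"rel": "self", "href": "http://127.0.0.1:5000/v3/"}]},
    {"id": "v2.0", "status": "stable",
     "links": [{"rel": "self", "href": "http://127.0.0.1:5000/v2.0/"}]}
  ]}
}
""")

def pick_endpoint(discovery_doc, want="v3"):
    """Return the 'self' href of a stable version matching `want`."""
    for version in discovery_doc["versions"]["values"]:
        if version["id"].startswith(want) and version["status"] == "stable":
            for link in version["links"]:
                if link["rel"] == "self":
                    return link["href"]
    raise LookupError("no %s endpoint advertised" % want)
```

With discovery like this, Heat's `auth_url.replace('v2.0', 'v3')` workaround becomes unnecessary: the client asks the catalog's versionless URL which versions exist and follows the link.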


> - If the client gets smarter, does that automatically let Heat support
> Identity v2? Or is more work required in Heat after your blueprint at [1]
> is complete?
>
> I saw a brief discussion at project meeting Jan 14 [3] but I didn't see
> any questioning of whether it's premature to preclude the use of Identity
> v2 in any integrated project.
>
> Can we discuss implications and considerations at the project meeting next
> week?
>

Sure!


> Thanks,
> Anne
>
> [3]
> http://eavesdrop.openstack.org/meetings/project/2014/project.2014-01-14-21.02.log.html
>
>
>> 2. Client (CLI) support for v3 API
>>
>> What is the status of porting keystoneclient to provide access to the v3
>> functionality on the CLI?
>>
>> In particular, Heat is moving towards using domains to encapsulate the
>> in-instance users it creates[2], so administrators will require some way
>> to manage users in a non-default domain, e.g. to get visibility of what
>> Heat is doing in that domain and to debug in the event of any issues.
>>
>> If anyone can provide any BP links or insight that would be much
>> appreciated!
>>
>> Thanks,
>>
>> Steve
>>
>> [1] https://blueprints.launchpad.net/heat/+spec/keystone-v3-only
>> [2] https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers
>>


[openstack-dev] [openstack-developer] About installing CLI for swift on windows

2014-02-01 Thread Mayur Patil
Hi All,

   I am trying to install the client library for the OpenStack Object Storage
API, i.e. python-swiftclient.

I have tried each of the following methods, but all fail:

1)   pip install python-swiftclient
2)   pip install 
3)   easy_install python-swiftclient

I also manually configured setup.cfg in python-swiftclient as follows:

http://fpaste.org/73621/

in which I removed scripts = bin/swift.

The configuration seems to be all OK: http://fpaste.org/73625/

It also installed swift.exe into C:\Python27\Scripts.

But when I check swift --version, it gives an error:
http://fpaste.org/73627/

I have also googled, but it did not help; I am stuck at this point!

Seeking for guidance,

Thanks !!
*--*
*Cheers,*
*Mayur.*


Re: [openstack-dev] [Neutron][IPv6] Private or Public network?

2014-02-01 Thread Shixiong Shang
Hi, Anthony:

Thanks a lot for the quick response! I didn't think about the provider network 
scenarios, and I'm grateful you brought them up. I will add provider networks 
to the chart.

Here is my understanding:

Private network: VM is attached to a subnet with NO default gateway at all, 
i.e. completely isolated
Provider network:  VM is attached to a physical network with a physical router 
acting as gateway, which is outside of OpenStack’s control

From an implementation perspective, both cases are identical, since OpenStack 
won't see a gateway port on the Neutron router. Hence, OpenStack should not be 
responsible for sending IPv6 RAs. That being said, the code I am developing will 
perform a check:

1) If an IPv6 subnet does NOT have a gateway port on a Neutron router (i.e. 
either a private or a provider network), then only the first two highlighted 
combinations are considered valid, because the remaining five options require 
RA announcements.
2) If an IPv6 subnet does have a gateway port on a Neutron router (i.e. a public 
network), then only the last five highlighted combinations are considered 
valid, because the first two options turn off RA announcements, which makes the 
existing gateway port on the Neutron router useless.
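The two-way check above can be sketched as follows. The function and argument names are hypothetical, chosen for illustration; the actual valid/invalid combinations come from the table in the attached PDF, which this thread only partially reproduces:

```python
def ipv6_combination_valid(ipv6_ra_mode, has_gateway_on_router):
    """Sketch of the proposed subnet check.

    ipv6_ra_mode is None when OpenStack's RA announcement is turned OFF.
    """
    if not has_gateway_on_router:
        # Case 1: private or provider network -- OpenStack must not send
        # RAs, so only the RA-off combinations are valid.
        return ipv6_ra_mode is None
    # Case 2: public network -- the gateway port on the Neutron router is
    # the default gateway and must announce RAs, so RA-off is rejected.
    return ipv6_ra_mode is not None
```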

Please keep me honest here…….

Thanks again!

Shixiong




On Feb 1, 2014, at 7:16 PM, Veiga, Anthony  
wrote:

> See Inline
> 
>> Hi, guys:
>> 
>> While I am implementing the code to support IPv6 two mode keywords, a 
>> question came to my mind and I would like to see your opinions.
>> 
>> If you look at the table below, you will notice that the first two 
>> combinations highlighted with red underline have “ipv6_ra_mode” set to OFF. 
>> I think these two options only make sense if the tenant subnet is PRIVATE, 
>> i.e. the subnet is not attached to any router. In this case, OpenStack 
>> should NOT send RAs. On the flip side, if the subnet is PUBLIC, i.e. the 
>> subnet is attached to a router, then the corresponding port on the router 
>> should be THE default gateway for the tenant subnet and hence needs to 
>> handle RA announcements.
> 
> 
> These options also make sense if you consider the first column of your chart. 
>  In both of these cases, they are listed as having an external router.  This 
> is REQUIRED for a provider network where the router is not owned by 
> OpenStack.  Please do NOT consider these private-only.
> 
>> 
>> In summary, I believe it doesn't make sense to allow OpenStack to create a 
>> default gateway for a tenant network but suppress RAs from the default 
>> gateway port on the Neutron router; if so, the default gateway port is 
>> pretty much useless. This is the way I am coding it now. However, I might 
>> have overlooked some scenarios. Please chime in if you see any use cases 
>> beyond what this table covers.
> 
> 
> If my upstream router is on-link, then I need to set it as the gateway (for 
> security purposes, we need to be able to filter RAs from rogue agents).  
> However, I still want OpenStack to handle address assignment.
> 
>> 
>> Thanks!
>> 
>> Shixiong
>> 
>> P.S. The PDF file of this table is uploaded to my Dropbox. Here is the link: 
>> https://www.dropbox.com/s/9bojvv9vywsz8sd/IPv6%20Two%20Modes%20v3.0.pdf
>> 
>> 
>> 
>> 
>> 
>> 



Re: [openstack-dev] Question about Nova BP.

2014-02-01 Thread Russell Bryant
On 02/01/2014 08:44 AM, wingwj wrote:
> It's very kind of you.
> Ok, we'll encourage the company to build the CI environment rapidly.
> 
> Thanks for all your recommendations. I'll feed them back to my team. 

Please don't miss the libvirt feedback, as well.  You can build a great
CI system, but if we don't see a compelling reason why this couldn't
have been added to libvirt instead of a new driver for OpenStack, we're
unlikely to accept it.

-- 
Russell Bryant



[openstack-dev] [Ironic] Functional testing, dependencies, etc

2014-02-01 Thread Devananda van der Veen
Hi all,

I've got a few updates to share on the status of functional testing of
Ironic.

Firstly, early last week, Ironic's tempest tests were added to our check
and gate pipeline, and non-voting checks were added to devstack and tempest
pipelines as well. These tests start the Ironic services in devstack, and
then exercise CRUD actions via our python client. The current tempest tests
exercise our integration with Keystone, and the integration of our internal
components (ir-api, ir-cond, mysql/pgsql, rabbit).

Since the project plans include integration with Glance, Keystone, Nova,
and Neutron, we initially enabled all of those in our devstack-gate
environment. However, due to the unpredictable nature of Neutron's test
suite, our gate was blocked as soon as it was enabled, and on Tuesday I
disabled Neutron in our devstack-gate runs.

This is not ideal. Ironic's PXE deployment driver depends on Neutron to
control the DHCP BOOT [4] option for nodes, so to do automated functional
testing of a PXE deployment, we will need to re-enable Neutron in
devstack-gate. We still have work to do before we are ready for end-to-end
deploy testing, so I'm hoping Neutron becomes a bit more stable by then.
I'm not thrilled about the prospects if it is not.

Our Nova driver [1] hasn't landed yet, and probably needs further
refinement before the Nova folks will be ready to land it, but it *is*
functional. Late in the week, Lucas and Chris each did an end-to-end
deployment with it!

So, today, we're not functionally testing Nova with an "ironic" virt driver
[2] -- even though Nova is enabled and tested by devstack-gate in Ironic's
pipeline. This was an oversight in my review of our devstack-gate tests:
we're currently gating on Nova using the libvirt driver. It's unrelated to
Ironic and I don't believe it should be exercised in Ironic's test suite.
Furthermore, we tripped a bug in the libvirt driver by doing file injection
with libguestfs. This has, once again, broken Ironic's gate.

I've proposed a temporary solution [3] that will cause libvirt to be tested
using configdrive in our pipe, as it is in all other projects except
Neutron. A better solution will be to not gate Ironic on the libvirt driver
at all.

The path forward that I see is:
- changes to land in devstack and tempest to create a suitable environment
for functional testing (eg, creating VMs and enrolling them with Ironic),
- the Nova "ironic" driver to be landed, with adequate unit tests, but no
integration tests,
- we set up an experimental devstack-gate pipe to load that Nova driver and
do integration tests between Ironic and Nova, Glance, and Neutron,
- iteratively fix bugs in devstack, ironic, our nova driver, and if
necessary, neutron, until this can become part of our gate.

In the meantime, I don't see a point in those services being enabled and
tested in our check or gate pipelines.


Regards,
Devananda


[1] https://review.openstack.org/5132
[2] https://review.openstack.org/70348
[3] https://review.openstack.org/70544
[4]
http://docs.openstack.org/api/openstack-network/2.0/content/extra-dhc-opt-ext-update.html


Re: [openstack-dev] [oslo] log message translations

2014-02-01 Thread Łukasz Jernaś
On Mon, Jan 27, 2014 at 7:44 PM, Joshua Harlow  wrote:
> +1 I've never understood this either personally.
>
> From what I know, most (if not all -- correct me if I am wrong) open source
> projects don't translate log messages; so it seems odd to be the special
> snowflake project/s.
>
> Do people find this type of translation useful?

Argh, that dreaded topic again, so let me drop my 2 euro cents here.

As for the open source/free software projects you're right -
personally, I've never seen any such project translating log messages.
But it seems that in the commercial world it's more common: I've seen
user applications break on translated error messages in a certain
database system. In those cases, though, the systems usually attached
additional codes (e.g. ERR1234) to the messages, so support didn't
really have to care about what language they were in.
The problem, from my point of view, is that OpenStack doesn't provide
such codes (it would be a nightmare to require each developer to
register their log messages first), so it seems that certain support
staff (whose names I don't really know) rely on the translated
strings, and even worse, some of these strings are presented to the
user in the form of exceptions (as seen in this thread). Not to
mention the ability of updated translations to break log-watching
tools, making people's monitoring break if they didn't launch the
daemons with LC_ALL=C set...
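The additional-code pattern mentioned above can be sketched in a few lines with the stdlib. The ERR numbers and the message catalog are invented for illustration; the point is only that tooling matches on the stable code while the human-readable tail is free to be translated:

```python
import logging

# Invented catalog: one stable, language-independent code per message.
# Only the text after the code would ever be translated.
CATALOG = {
    "ERR1054": "Unknown column %(column)s in field list",
}

def format_coded(code, **kwargs):
    """Render a log line whose code prefix survives translation."""
    return "%s: %s" % (code, CATALOG[code] % kwargs)

logging.basicConfig(format="%(message)s")
logging.getLogger("demo").error(
    format_coded("ERR1054", column="'instances.locked_by'"))
```

A log watcher can then grep for `ERR1054` regardless of the operator's locale.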


Best regards,
-- 
Łukasz [DeeJay1] Jernaś



Re: [openstack-dev] [keystone][heat] Migration to keystone v3 API questions

2014-02-01 Thread Anne Gentle
On Thu, Jan 23, 2014 at 5:21 AM, Steven Hardy  wrote:

> Hi all,
>
> I've recently been working on migrating the heat internal interfaces to use
> the keystone v3 API exclusively[1].
>
> This work has mostly been going well, but I've hit a couple of issues which
> I wanted to discuss, so we agree the most appropriate workarounds:
>
> 1. keystoneclient v3 functionality not accessible when catalog contains a
> v2 endpoint:
>
> In my test environment my keystone endpoint looks like:
>
> http://127.0.0.1:5000/v2.0
>
> And I'd guess this is similar to the majority of real deployments atm?
>
>
Yes, I was just researching this for the Ops Guide O'Reilly edition, and
don't see evidence of deployments doing otherwise in their endpoint
definition.

Also I didn't uncover many (any?) deployments going from Identity v2 to v3
yet. Meaning, if they're already running v2, when they upgrade to havana,
they do not move to Identity v3.



> So when creating a keystoneclient object I've been doing:
>
> from keystoneclient.v3 import client as kc_v3
> v3_endpoint = self.context.auth_url.replace('v2.0', 'v3')
> client = kc_v3.Client(auth_url=v3_endpoint, ...
>
> Which, assuming the keystone service has both v2 and v3 APIs enabled
> works, but any attempt to use v3 functionality fails with 404 because
> keystoneclient falls back to using the v2.0 endpoint from the catalog.
>
> So to work around this I do this:
>
> client = kc_v3.Client(auth_url=v3_endpoint, endpoint=v3_endpoint, ...
> client.authenticate()
>
> Which results in the v3 features working OK.
>
> So my questions are:
> - Is this a reasonable workaround for production environments?
> - What is the roadmap for moving keystone endpoints to be version agnostic?
> - Is there work ongoing to make the client smarter in terms of figuring out
>   what URL to use (version negotiation or substituting the appropriate path
>   when we are in an environment with a legacy v2.0 endpoint..)
>
>
I'd like to understand the ramifications of
https://review.openstack.org/#/c/62801/ so I have a few questions:

- If keystone service catalog endpoints become version agnostic, does that
make other projects' support of multiple versions of the Identity API
easier?

- If the client gets smarter, does that automatically let Heat support
Identity v2? Or is more work required in Heat after your blueprint at [1]
is complete?

I saw a brief discussion at project meeting Jan 14 [3] but I didn't see any
questioning of whether it's premature to preclude the use of Identity v2 in
any integrated project.

Can we discuss implications and considerations at the project meeting next
week?
Thanks,
Anne

[3]
http://eavesdrop.openstack.org/meetings/project/2014/project.2014-01-14-21.02.log.html


> 2. Client (CLI) support for v3 API
>
> What is the status of porting keystoneclient to provide access to the v3
> functionality on the CLI?
>
> In particular, Heat is moving towards using domains to encapsulate the
> in-instance users it creates[2], so administrators will require some way to
> manage users in a non-default domain, e.g. to get visibility of what Heat is
> doing in that domain and debug in the event of any issues.
>
> If anyone can provide any BP links or insight that would be much
> appreciated!
>
> Thanks,
>
> Steve
>
> [1] https://blueprints.launchpad.net/heat/+spec/keystone-v3-only
> [2] https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers
>


[openstack-dev] [Horizon] User Signup

2014-02-01 Thread Saju M
Hi folks,

Could you please spend 5 minutes on the blueprint
https://blueprints.launchpad.net/horizon/+spec/user-registration and add
your suggestions on the whiteboard.


Thanks,


Re: [openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-01 Thread Roman Podoliaka
Hi all,

My two cents.

> 2) Extend alembic so that op.drop_column() does the right thing
We could, but should we?

The only reason alembic doesn't support these operations for SQLite
yet is that SQLite lacks proper support for the ALTER statement. For
sqlalchemy-migrate we've been providing a work-around in the form of
recreating the table and copying all existing rows (which is a
hack, really).

But to be able to recreate a table, we first must have its definition.
And we've been relying on SQLAlchemy schema reflection facilities for
that. Unfortunately, this approach has a few drawbacks:

1) SQLAlchemy versions prior to 0.8.4 don't support reflection of
unique constraints, which means the recreated table won't have them;

2) special care must be taken in 'edge' cases (e.g. when you want to
drop a BOOLEAN column, you must also drop the corresponding CHECK (col
in (0, 1)) constraint manually, or SQLite will raise an error when the
table is recreated without the column being dropped)

3) special care must be taken for 'custom' type columns (it's got
better with SQLAlchemy 0.8.x, but e.g. in 0.7.x we had to override
definitions of reflected BIGINT columns manually for each
column.drop() call)

4) schema reflection can't be performed when alembic migrations are
run in 'offline' mode (without connecting to a DB)
...
(probably something else I've forgotten)

So it's totally doable, but, IMO, there is no real benefit in
supporting schema migrations for SQLite.

> ...attempts to drop schema generation based on models in favor of migrations

As long as we have a test that checks that the DB schema obtained by
running the migration scripts is equal to the one obtained by calling
metadata.create_all(), it's perfectly OK to use the model definitions to
generate the initial DB schema for running unit tests, as well as
for new installations of OpenStack (and this is actually faster than
running the migration scripts). ... and if we have strong objections
to doing metadata.create_all(), we can always use the migration
scripts for both new installations and upgrades for all DB backends
except SQLite.
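For reference, the recreate-and-copy workaround reads roughly like the sketch below, done here with the stdlib sqlite3 module rather than alembic/SQLAlchemy (the helper name is invented). Note how the naive CREATE TABLE ... AS SELECT silently loses types, constraints, and indexes -- precisely the reflection edge cases listed above that a real migration tool has to handle:

```python
import sqlite3

def sqlite_drop_column(conn, table, column):
    # Reflect the remaining column names via SQLite itself.
    info = conn.execute("PRAGMA table_info(%s)" % table).fetchall()
    keep = ", ".join(row[1] for row in info if row[1] != column)
    # Recreate-and-copy: build a temporary table without the column,
    # drop the original, then rename the copy into place.
    # CREATE TABLE ... AS SELECT drops constraints and indexes -- a real
    # migration tool must reflect and re-apply them.
    conn.execute("CREATE TABLE %s_tmp AS SELECT %s FROM %s"
                 % (table, keep, table))
    conn.execute("DROP TABLE %s" % table)
    conn.execute("ALTER TABLE %s_tmp RENAME TO %s" % (table, table))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER, locked_by TEXT)")
conn.execute("INSERT INTO instances VALUES (1, 'admin')")
sqlite_drop_column(conn, "instances", "locked_by")
```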

Thanks,
Roman

On Sat, Feb 1, 2014 at 12:09 PM, Eugene Nikanorov
 wrote:
> Boris,
>
> Sorry for the offtopic.
> Is switching to model-based schema generation something that has been decided? I see
> the opposite: attempts to drop schema generation based on models in favor of
> migrations.
> Can you point to some discussion threads?
>
> Thanks,
> Eugene.
>
>
>
> On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic 
> wrote:
>>
>> Jay,
>>
>> Yep we shouldn't use migrations for sqlite at all.
>>
>> The major issue that we have now is that we are not able to ensure that the
>> DB schemas created by the migrations and the models are the same (actually,
>> they are not).
>>
>> So before dropping support for migrations on sqlite & switching to
>> model-based schema creation, we should add tests that will check that the
>> models & migrations are in sync.
>> (we are working on this)
>>
>>
>>
>> Best regards,
>> Boris Pavlovic
>>
>>
>> On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev 
>> wrote:
>>>
>>> Trevor,
>>>
>>> Such check could be useful on alembic side too. Good opportunity for
>>> contribution.
>>>
>>> Andrew.
>>>
>>>
>>> On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay  wrote:

 Okay,  I can accept that migrations shouldn't be supported on sqlite.

 However, if that's the case then we need to fix up savanna-db-manage so
 that it checks the db connection info and throws a polite error to the
 user for attempted migrations on unsupported platforms. For example:

 "Database migrations are not supported for sqlite"

 Because, as a developer, when I see an SQL error trace as the result of
 an operation I assume it's broken :)

 Best,

 Trevor

 On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
 > On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
 > > I was playing with alembic migration and discovered that
 > > op.drop_column() doesn't work with sqlite.  This is because sqlite
 > > doesn't support dropping a column (broken imho, but that's another
 > > discussion).  Sqlite throws a syntax error.
 > >
 > > To make this work with sqlite, you have to copy the table to a
 > > temporary table, excluding the column(s) you don't want, delete the
 > > old one, and then rename the new table.
 > >
 > > The existing 002 migration uses op.drop_column(), so I'm assuming
 > > it's
 > > broken, too (I need to check what the migration test is doing).  I
 > > was
 > > working on an 003.
 > >
 > > How do we want to handle this?  Three good options I can think of:
 > >
 > > 1) don't support migrations for sqlite (I think "no", but maybe)
 > >
 > > 2) Extend alembic so that op.drop_column() does the right thing
 > > (more
 > > open-source contributions for us, yay :) )
 > >
 > > 3) Add our own

Re: [openstack-dev] Question about Nova BP.

2014-02-01 Thread wingwj
It's very kind of you.
Ok, we'll encourage the company to build the CI environment rapidly.

Thanks for all your recommendations. I'll feed them back to my team. 

Best wishes,
WingWJ


> 在 2014年2月1日,3:12,Russell Bryant  写道:
> 
>> On 01/31/2014 01:11 PM, Joe Gordon wrote:
>> Including openstack-dev ML in response.
>> 
>> 
>>> On Fri, Jan 31, 2014 at 8:14 AM, wingwj  wrote:
>>> Hi, Mr Gordon,
>>> 
>>> Firstly, sorry for my lately reply for this BP..
>>> https://blueprints.launchpad.net/nova/+spec/driver-for-huawei-fusioncompute
>>> 
>>> Honestly speaking, we wrote the first FusionCompute Nova-driver on Folsom 
>>> edition, and now it has been updated with Havana. We maintained by 
>>> ourselves.
>>> 
>>> Now I have a question about your suggestion in whiteboard of this BP:
>>> Is a CI environment a requirement for this BP? Huawei is now preparing 
>>> a CI environment for Nova & Neutron.
>>> But due to company policy, it's not an easy thing to set up 
>>> rapidly. We'll try our best.
>> 
>> Yes, CI is a requirement for adding  a new driver, please see:
>> 
>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>> http://lists.openstack.org/pipermail/openstack-dev/2013-July/011260.html
>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix/DeprecationPlan
>> 
>>> 
>>> So can we commit the codes, and prepare the CI at the same time?
>> 
>> That's a good question. I don't think that is feasible for Icehouse,
>> but as far as I know we haven't fully discussed how to introduce new
>> drivers now that we have the third-party testing requirement.
> 
> I think the way to do this is to put the driver in its own repo on
> stackforge, and demonstrate CI on that.
> 
>> An alternate option is to add FusionCompute support to libvirt, and
>> since nova already supports libvirt you will get nova support
>> automatically.
> 
> Huge +1.  That's really the ideal answer to support for any new
> hypervisor, unless there's some compelling reason why it's not an option.
> 
> -- 
> Russell Bryant



Re: [openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-02-01 Thread Łukasz Jernaś
On Tue, Jan 28, 2014 at 6:49 PM, Ladislav Smola  wrote:
> It might be good to monitor it via SNMPD, as this daemon will already be
> running on each node. And I see it should be possible, though it's
> not very popular.
>
> Then it would be nice to have the data stored in Ceilometer, as
> it provides a generic backend for storing samples and querying them
> (it would be nice to have a history of those samples). It should be enough
> to send it in the correct format to the notification bus and Ceilometer
> will store it.
> For now, Tuskar would just grab it from Ceilometer.
>
> The problem here is that every node can have different services running,
> so you would have to write some smart inspector that would know what
> is running where. We have been talking about exposing this kind of
> information in Glance, so it would return the list of services for an image.
> Then you would get the list of nodes for an image and poll them via SNMP.
> This could probably be an inspector of the central agent, the same approach
> as for getting the baremetal metrics.

Hi,

I'm a bit new here, so please excuse me if I state the obvious or
speak gibberish.

One problem with using only SNMPD for querying might be that by
default it only exposes the running process and its state, which
might not fully describe whether it's working at all, as some processes
tend to look alive in the ps output but aren't doing anything or are
stuck on some code path. So in an ideal world you'd require some other
form of health check, which isn't as easily exposed via snmpd conf
changes and needs some boilerplate code to expose that data via an OID.
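As a toy illustration of the kind of active check meant here (the helper is hypothetical, stdlib only): instead of trusting the ps output, actually open a connection to the service's port. A real check would go further and hit a meaningful endpoint, such as an API version URL; the throwaway local listener below merely stands in for a monitored service:

```python
import socket
import threading

def tcp_alive(host, port, timeout=1.0):
    """Crude liveness probe: can we complete a TCP connect at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway local listener standing in for a monitored service.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=server.accept, daemon=True).start()
```

A process can be visible in ps and still fail this probe, which is exactly the gap between "running" and "working" described above.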

As for writing a separate daemon for that, I was under the impression
that ceilometer agents on hosts (clarification: by hosts I mean the
actual hardware running nova, glance, etc.) would provide those
capabilities via pollsters/plugins instead of adding other entities to
keep track of (the usual who-monitors-the-monitoring-process problem),
which could then in turn be exposed via SNMP for other tools to
collect if needed.

But if my expectations are wrong, feel free to correct me.

Have a nice day,
-- 
Łukasz [DeeJay1] Jernaś



Re: [openstack-dev] Barbican Incubation Review

2014-02-01 Thread Sergey Lukjanov
Probably, while you're not incubated, it would be better to place this code
in your own repo (example:
https://github.com/stackforge/solum/tree/master/contrib/devstack).


On Sat, Feb 1, 2014 at 5:43 AM, Chad Lung  wrote:

>
> This is a follow-up to Jarret Raim's email regarding Barbican's incubation
> review:
>
> http://lists.openstack.org/pipermail/openstack-dev/2014-January/025860.html
>
> Please note that the PR for Barbican's DevStack integration can now be
> found here:
>
> https://review.openstack.org/#/c/70512/
>
> Thanks for any feedback or comments.
>
> Chad Lung
>
>
>
>


-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-01 Thread Eugene Nikanorov
Boris,

Sorry for the off-topic question.
Has switching to model-based schema generation been decided? I've seen
the opposite: attempts to drop model-based schema generation in favor
of migrations.
Can you point me to some discussion threads?

Thanks,
Eugene.



On Sat, Feb 1, 2014 at 2:19 AM, Boris Pavlovic wrote:

> Jay,
>
> Yep, we shouldn't use migrations for sqlite at all.
>
> The major issue we have now is that we are not able to ensure that the DB
> schemas created by migrations & by models are the same (actually they are
> not).
>
> So before dropping migration support for sqlite & switching to
> model-based schema creation, we should add tests that check that models &
> migrations are in sync.
> (we are working on this)
>
>
>
> Best regards,
> Boris Pavlovic
>
>
> On Fri, Jan 31, 2014 at 7:31 PM, Andrew Lazarev wrote:
>
>> Trevor,
>>
>> Such a check could be useful on the alembic side too. Good opportunity
>> for contribution.
>>
>> Andrew.
>>
>>
>> On Fri, Jan 31, 2014 at 6:12 AM, Trevor McKay  wrote:
>>
>>> Okay, I can accept that migrations shouldn't be supported on sqlite.
>>>
>>> However, if that's the case then we need to fix up savanna-db-manage so
>>> that it checks the db connection info and throws a polite error to the
>>> user for attempted migrations on unsupported platforms. For example:
>>>
>>> "Database migrations are not supported for sqlite"
>>>
>>> Because, as a developer, when I see an SQL error trace as the result of
>>> an operation, I assume it's broken :)
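A guard of the kind described above might look roughly like this (a sketch only; the function name and the URL parsing are illustrative, not savanna-db-manage's actual code):

```python
def check_migration_supported(connection_string):
    """Refuse to run migrations on backends that can't support them.

    The backend name is parsed straight from the SQLAlchemy-style URL
    (e.g. "sqlite:///savanna.db" or "mysql+pymysql://user@host/db"),
    so no extra dependencies are needed for the check.
    """
    backend = connection_string.split(":", 1)[0].split("+", 1)[0]
    if backend == "sqlite":
        # Polite, explicit error instead of a raw SQL traceback.
        raise SystemExit("Database migrations are not supported for sqlite")
    return backend


print(check_migration_supported("mysql+pymysql://user:pw@host/savanna"))  # mysql
```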
>>>
>>> Best,
>>>
>>> Trevor
>>>
>>> On Thu, 2014-01-30 at 15:04 -0500, Jay Pipes wrote:
>>> > On Thu, 2014-01-30 at 14:51 -0500, Trevor McKay wrote:
>>> > > I was playing with alembic migration and discovered that
>>> > > op.drop_column() doesn't work with sqlite.  This is because sqlite
>>> > > doesn't support dropping a column (broken imho, but that's another
>>> > > discussion).  Sqlite throws a syntax error.
>>> > >
>>> > > To make this work with sqlite, you have to copy the table to a
>>> > > temporary one excluding the column(s) you don't want, delete the
>>> > > old one, and then rename the new table.
>>> > >
>>> > > The existing 002 migration uses op.drop_column(), so I'm assuming
>>> > > it's broken, too (I need to check what the migration test is
>>> > > doing).  I was working on an 003.
>>> > >
>>> > > How do we want to handle this?  Three good options I can think of:
>>> > >
>>> > > 1) don't support migrations for sqlite (I think "no", but maybe)
>>> > >
>>> > > 2) Extend alembic so that op.drop_column() does the right thing (more
>>> > > open-source contributions for us, yay :) )
>>> > >
>>> > > 3) Add our own wrapper in savanna so that we have a drop_column()
>>> method
>>> > > that wraps copy/rename.
>>> > >
>>> > > Ideas, comments?
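For reference, option 3's copy/rename workaround might be sketched like this (illustrative table and column names; note that CREATE TABLE ... AS SELECT loses constraints and indexes, which a real implementation would have to recreate):

```python
import sqlite3


def drop_column_sqlite(conn, table, drop_col):
    # SQLite can't do ALTER TABLE ... DROP COLUMN (before 3.35), so we
    # rebuild the table without the unwanted column and rename it into place.
    cur = conn.cursor()
    keep = [row[1] for row in cur.execute(f"PRAGMA table_info({table})")
            if row[1] != drop_col]
    cur.execute(f"CREATE TABLE {table}_tmp AS "
                f"SELECT {', '.join(keep)} FROM {table}")
    cur.execute(f"DROP TABLE {table}")
    cur.execute(f"ALTER TABLE {table}_tmp RENAME TO {table}")
    conn.commit()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, name TEXT, obsolete TEXT)")
conn.execute("INSERT INTO jobs VALUES (1, 'a', 'x')")
drop_column_sqlite(conn, "jobs", "obsolete")
cols = [row[1] for row in conn.execute("PRAGMA table_info(jobs)")]
print(cols)  # ['id', 'name']
```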
>>> >
>>> > Migrations should really not be run against SQLite at all -- only on
>>> the
>>> > databases that would be used in production. I believe the general
>>> > direction of the contributor community is to be consistent around
>>> > testing of migrations and to not run migrations at all in unit tests
>>> > (which use SQLite).
>>> >
>>> > Boris (cc'd) may have some more to say on this topic.
>>> >
>>> > Best,
>>> > -jay
>>> >
>>> >
>>>
>>>
>>>
>>>
>>
>>
>
>
>


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-01 Thread Huang Zhiteng
On Sat, Feb 1, 2014 at 4:06 PM, Mike Perez  wrote:
> Folks,
>
> I would love to get people together who are interested in Cinder stability
> to really dedicate a few days. This is not for additional features, but
> rather for finishing what we already have and getting it into good shape
> before the end of the release.
>
> When: Feb 24-26
> Where: San Francisco (DreamHost Office can host), Colorado, remote?
>
> Some ideas that come to mind:
>
> - Cleanup/complete volume retype
> - Cleanup/complete volume migration [1][2]
> - Other ideas that come from this thread.
>
> I can't stress the dedicated part enough. I think if we have some folks
> from core and anyone interested in contributing and staying focused, we
> can really get a lot done in a few days with a small set of doable
> stability goals to stay focused on. If there is enough interest, being
> together in the mentioned locations would be great; otherwise remote
> would be fine as long as people can stay focused and communicate through
> suggested channels like TeamSpeak or Google Hangouts.
>
> What do you guys think? Location? Other stability concerns to add to the
> list?
>
+1, will join.
> [1] - https://bugs.launchpad.net/cinder/+bug/1255622
> [2] - https://bugs.launchpad.net/cinder/+bug/1246200
>
>
> -Mike Perez
>
>



-- 
Regards
Huang Zhiteng



[openstack-dev] Cinder Stability Hack-a-thon

2014-02-01 Thread Mike Perez
Folks,

I would love to get people together who are interested in Cinder stability
to really dedicate a few days. This is not for additional features, but
rather for finishing what we already have and getting it into good shape
before the end of the release.

When: Feb 24-26
Where: San Francisco (DreamHost Office can host), Colorado, remote?

Some ideas that come to mind:

- Cleanup/complete volume retype
- Cleanup/complete volume migration [1][2]
- Other ideas that come from this thread.

I can't stress the dedicated part enough. I think if we have some folks
from core and anyone interested in contributing and staying focused, we
can really get a lot done in a few days with a small set of doable
stability goals to stay focused on. If there is enough interest, being
together in the mentioned locations would be great; otherwise remote
would be fine as long as people can stay focused and communicate through
suggested channels like TeamSpeak or Google Hangouts.

What do you guys think? Location? Other stability concerns to add to the
list?

[1] - https://bugs.launchpad.net/cinder/+bug/1255622
[2] - https://bugs.launchpad.net/cinder/+bug/1246200


-Mike Perez