Re: [openstack-dev] [Solum] Stackforge Repo Ready

2013-11-01 Thread Noorul Islam K M
Noorul Islam K M noo...@noorul.com writes:

 Adrian Otto adrian.o...@rackspace.com writes:

 Team,

 Our StackForge code repo is open, so you may begin submitting code for 
 review. For those new to the process, I made a wiki page with links to the 
 repo and information about how to contribute:

 https://wiki.openstack.org/wiki/Solum/Contributing


 1. .gitreview file is missing, so I submitted a patch 

 https://review.openstack.org/#/c/54877

 This patch also contains an update to the README to include relevant project
 information.
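For context, a `.gitreview` file of the kind the patch adds is a small ini fragment telling git-review where Gerrit lives; a typical StackForge one of that era looked like this (the exact contents would be in the patch under review):

```ini
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/solum.git
```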

 2. My review request got rejected by Jenkins. A re-base against [1] is
not helping.

 3. The GitHub repo [2] is behind [1]. Is it not mirrored yet?


I cross-verified and it looks like they are in sync. Also, I think this
is the first review after the initial repository setup. Did we miss
anything that is causing Jenkins to fail?

Thanks and Regards
Noorul


 [1] git://git.openstack.org/stackforge/solum
 [2] https://github.com/stackforge/solum

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-Dev] Announcement of the Compass Deployment project

2013-11-01 Thread Robert Collins
On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:

 A message from my associate as he wings to the Icehouse OpenStack summit
 (and yes, we're psyched):
 Our project, code-named Compass, is a RESTful-API-driven deployment platform
 that performs discovery of the physical machines attached to a specified set
 of switches. It then customizes configurations for machines you identify and
 installs the systems and networks to your configuration specs. Besides
 presenting the technical internals and design decisions of Compass at the
 Icehouse summit, we will also have a demo session.

Cool - when is it? I'd like to get along.

...
 We look forward to showing the community our project, receiving and
 incorporating feedback, brainstorming what else it could do, and integrating it into
 the OpenStack family. We are a part of the OpenStack community and want to
 support it both with core participation and with Compass.

I'm /particularly/ interested in the interaction with Neutron and
network modelling - do you use Neutron for the physical switch
interrogation, do you inform Neutron about the topology and so on.

Anyhow, let's make sure we can connect and see where we can collaborate!

Cheers,
Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [qa][keystone] Help with XML Tempest API tests

2013-11-01 Thread Steven Hardy
On Thu, Oct 31, 2013 at 09:40:46PM -0400, Adam Young wrote:
snip
 I think it is safe to say that the trusts API is broken in XML.  I
 added the following test:
 
 diff --git a/keystone/tests/test_v3_auth.py b/keystone/tests/test_v3_auth.py
 index c0e191b..6a0c10c 100644
 --- a/keystone/tests/test_v3_auth.py
 +++ b/keystone/tests/test_v3_auth.py
 @@ -2238,3 +2238,7 @@ class TestTrustAuth(TestAuthInfo):
          self.get('/OS-TRUST/trusts?trustor_user_id=%s' %
                   self.user_id, expected_status=401,
                   token=trust_token)
 +
 +
 +class TestTrustAuthXML(TestTrustAuth):
 +    content_type = 'xml'
 
 And, when running it, I got:
 
 
 Ran 24 tests in 5.832s
 
 FAILED (SKIP=1, errors=12)
 
 
 https://bugs.launchpad.net/keystone/+bug/1246941
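The subclass-with-a-class-attribute trick in that diff is a common way to rerun a whole test class against a second serialization. A self-contained sketch of the pattern (this is not Keystone's actual base class; names here are illustrative):

```python
import json
import unittest
from xml.etree import ElementTree as ET

class TrustApiTest(unittest.TestCase):
    """Runs every test using the serializer named by content_type."""
    content_type = 'json'

    def serialize(self, body):
        if self.content_type == 'xml':
            # encode each field as an attribute on a <request> element
            root = ET.Element('request', {k: str(v) for k, v in body.items()})
            return ET.tostring(root)
        return json.dumps(body).encode()

    def test_serializes_request(self):
        self.assertTrue(self.serialize({'trustor_user_id': 'abc'}))

# one line reruns the entire suite over the XML path
class TrustApiTestXML(TrustApiTest):
    content_type = 'xml'
```

Everything inherited from the base class is re-executed with the XML serializer, which is exactly why the diff above surfaced 12 errors at once.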

Great (well not great that we have a bug, but great that all this effort
going into testing is finding some real bugs! :) )

I notice there's a review associated with that bug, but I can't view it -
if it's draft can you please add me to the reviewers list?

Looking forward to seeing the patch, as you said the unit test examples
should help me complete my Tempest patch.

Also note I raised:

https://bugs.launchpad.net/keystone/+bug/1246831

Which seems somewhat related to this (we get a 500 with the XML encoded
expires_at=None, which results in a zero-length string, so the trust
controller treats it as a valid timestamp instead of ignoring it)
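The failure mode Steve describes (XML deserialization turning a JSON null into an empty string) is the kind of thing a defensive parse guards against. A hypothetical sketch, not the actual trust controller code:

```python
from datetime import datetime

def parse_expires_at(raw):
    """XML deserialization can turn a JSON null into '' -- treat both
    as 'no expiry' rather than handing a bogus value to the parser."""
    if raw is None or raw == '':
        return None
    return datetime.strptime(raw, '%Y-%m-%dT%H:%M:%S.%f')
```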

I was planning to send a patch for the latter, but seems like it may
overlap with your XML fixes, so I'll hold off for now.

Thanks!

Steve



Re: [openstack-dev] [Horizon] Abdicating the PTL Position

2013-11-01 Thread Thierry Carrez
Gabriel Hurley wrote:
 It saddens me to say that for a mix of reasons I have decided to abdicate my 
 position as PTL for Horizon. If anything, the reasons are all good ones 
 overall, I just have to make the right decision for both myself and the 
 project.

I'm sorry to hear that! That said, stepping down gracefully is a key
aspect of our meritocratic governance, and I'm delighted to see some new
community members rise to leadership positions.

 In the interim David Lyle will be the acting PTL. The Horizon core team has 
 all weighed in with their confidence in his abilities, and he has confirmed 
 his ability and interest in doing so. There will be a nomination period in 
 coming weeks to determine the PTL for the full release cycle, should anyone 
 else wish to run for the job as well. Thierry will announce more information 
 about that soon.

We'll be running new elections immediately. The electorate will be the
Foundation individual members that are also authors of changes for
Horizon over the last 12 months (from 2012-11-01 to 2013-10-31, 23:59 UTC).

Any member of that electorate can propose his/her candidacy for the
election. No nomination is required. They do so by sending an email to
the openstack-dev@lists.openstack.org mailing-list, with the subject:
Horizon PTL candidacy. The email can include a description of the
candidate's platform.

The deadline for self-nomination is Friday, November 8, 23:59 UTC.
If more than one candidate is declared, elections will then be run over
the November 11 week.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Fwd: [Openstack-Dev] Announcement of the Compass Deployment project

2013-11-01 Thread Dmitry Mescheryakov
I've noticed you list "Remote install and configure a Hadoop cluster
(synergy with Savanna?)" among possible use cases. Recently there was a
discussion about Savanna on bare metal provisioning through Nova (see
thread [1]). Nobody has tested that yet, but it was concluded that it should
work without any changes in Savanna code.

So if Compass could set up bare metal provisioning with Nova, possibly
Savanna will work on top of that out of the box.

Dmitry

[1]
http://lists.openstack.org/pipermail/openstack-dev/2013-October/017438.html


2013/11/1 Robert Collins robe...@robertcollins.net

 On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:
 
  A message from my associate as he wings to the Icehouse OpenStack summit
  (and yes, we're psyched):
  Our project, code-named Compass, is a RESTful-API-driven deployment platform
  that performs discovery of the physical machines attached to a specified set
  of switches. It then customizes configurations for machines you identify and
  installs the systems and networks to your configuration specs. Besides
  presenting the technical internals and design decisions of Compass at the
  Icehouse summit, we will also have a demo session.

 Cool - when is it? I'd like to get along.

 ...
  We look forward to showing the community our project, receiving and
  incorporating feedback, brainstorming what else it could do, and integrating it
  into the OpenStack family. We are a part of the OpenStack community and
  want to support it both with core participation and with Compass.

 I'm /particularly/ interested in the interaction with Neutron and
 network modelling - do you use Neutron for the physical switch
 interrogation, do you inform Neutron about the topology and so on.

 Anyhow, let's make sure we can connect and see where we can collaborate!

 Cheers,
 Rob



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud



Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread John Garbutt
Its intentional. Cells is there to split up your nodes into more
manageable chunks.

There are quite a few design summit sessions on looking into
alternative approaches to our current scheduler.

While I would love a single scheduler to make everyone happy, I am
thinking we might end up with several schedulers, each with slightly
different properties, and you pick one depending on what you want to
do with your cloud.

John

On 31 October 2013 22:39, Jiang, Yunhong yunhong.ji...@intel.com wrote:
 I noticed several filters (AggregateMultiTenancyIsolation, ram_filter, 
 type_filter, AggregateInstanceExtraSpecsFilter) have DB access in 
 host_passes(). Some even access the DB on each invocation.

 Just curious if this is considered a performance issue? With a cloud of 10k 
 nodes, 60 VMs per node, and a 3-hour VM life cycle, it will have more than 1 
 million DB accesses per second. Not a small number IMHO.
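The back-of-envelope number can be checked; the factor of two DB-hitting filters per host evaluation is an assumption added here to match the quoted figure:

```python
nodes = 10_000
vms_per_node = 60
lifecycle_s = 3 * 3600           # each VM lives ~3 hours

# steady-state scheduling requests per second across the whole cloud
boots_per_s = nodes * vms_per_node / lifecycle_s

# each request calls host_passes() once per candidate host; assume
# ~2 of the filters hit the DB on every invocation (assumption)
db_calls_per_s = boots_per_s * nodes * 2
```

That gives roughly 55 scheduling requests per second, and with every filter pass fanning out across all 10k hosts, the per-invocation DB access multiplies it past a million calls per second.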

 Thanks
 --jyh



Re: [openstack-dev] [qa][keystone] Help with XML Tempest API tests

2013-11-01 Thread Sean Dague

On 11/01/2013 04:58 AM, Steven Hardy wrote:

On Thu, Oct 31, 2013 at 09:40:46PM -0400, Adam Young wrote:
snip

I think it is safe to say that the trusts API is broken in XML.  I
added the following test:

diff --git a/keystone/tests/test_v3_auth.py b/keystone/tests/test_v3_auth.py
index c0e191b..6a0c10c 100644
--- a/keystone/tests/test_v3_auth.py
+++ b/keystone/tests/test_v3_auth.py
@@ -2238,3 +2238,7 @@ class TestTrustAuth(TestAuthInfo):
  self.get('/OS-TRUST/trusts?trustor_user_id=%s' %
   self.user_id, expected_status=401,
   token=trust_token)
+
+
+class TestTrustAuthXML(TestTrustAuth):
+    content_type = 'xml'

And, when running it, I got:


Ran 24 tests in 5.832s

FAILED (SKIP=1, errors=12)


https://bugs.launchpad.net/keystone/+bug/1246941


Great (well not great that we have a bug, but great that all this effort
going into testing is finding some real bugs! :) )

I notice there's a review associated with that bug, but I can't view it -
if it's draft can you please add me to the reviewers list?

Looking forward to seeing the patch, as you said the unit test examples
should help me complete my Tempest patch.

Also note I raised:

https://bugs.launchpad.net/keystone/+bug/1246831

Which seems somewhat related to this (we get a 500 with the XML encoded
expires_at=None, which results in a zero-length string, so the trust
controller treats it as a valid timestamp instead of ignoring it)

I was planning to send a patch for the latter, but seems like it may
overlap with your XML fixes, so I'll hold off for now.


My experience with the nova API and adding XML testing is that any 
service's XML API is broken by default (because the underlying logic is 
pretty JSON skewed, and the python clients talk JSON). So this isn't 
very surprising. Thanks again for diving into it!


Honestly, one of these days we should have another serious conversation 
about dropping XML entirely again (across all projects). A single data 
payload that works is way better than additional payloads that don't.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread John Garbutt
On 31 October 2013 16:57, Johannes Erdfelt johan...@erdfelt.com wrote:
 On Thu, Oct 31, 2013, Sean Dague s...@dague.net wrote:
 So there is a series of patches starting with -
 https://review.openstack.org/#/c/53417/ that go back and radically
 change existing migration files.

I initially agreed with the -2, but actually I like this change; I
will get to that later.

 This is really a no-no, unless there is a critical bug fix that
 absolutely requires it. Changing past migrations should be
 considered with the same level of weight as an N-2 backport, only
 done when there is huge upside to the change.

 I've -2ed the first 2 patches in the series, though that review
 applies to all of them (I figured a mailing list thread was probably
 more useful than -2ing everything in the series).

 There needs to be really solid discussion about the trade offs here
 before contemplating something as dangerous as this.

 The most important thing for DB migrations is that they remain
 functionally identical.

+1

We really should never change what the migrations functionally do.

Admittedly we should ensure we don't change something by accident,
so I agree with minimizing the changes in those files also.

 Historically we have allowed many changes to DB migrations that kept
 them functionally identical to how they were before.

 Looking through the commit history, here's a sampling of changes:

 - _ was no longer monkey patched, necessitating a new import added
 - fix bugs causing testing problems
 - change copyright headers
 - remove unused code (creating logger, imports, etc)
 - fix bugs causing the migrations to fail to function (on PostgreSQL,
   downgrade bugs, etc)
 - style changes (removing use of locals(), whitespace, etc)
 - make migrations faster
 - add comments to clarify code
 - improve compatibility with newer versions of SQLAlchemy

 The reviews you're referencing seem to fall into what we have
 historically allowed.

+1 The patch is really just refactoring.

I think we should move to the more descriptive field names, so we
remove the risk of cut and paste errors in string length, etc.
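A minimal illustration of the named-length idea (table and column names are hypothetical, and real Nova migrations go through SQLAlchemy rather than raw SQL):

```python
import sqlite3

# Name the column widths once instead of copy-pasting magic numbers
# between migrations -- the "descriptive field names" point above.
UUID_LEN = 36
HOSTNAME_LEN = 255

def upgrade(conn):
    conn.execute(
        "CREATE TABLE example_instances ("
        f"uuid VARCHAR({UUID_LEN}) NOT NULL, "
        f"host VARCHAR({HOSTNAME_LEN}))"
    )

conn = sqlite3.connect(":memory:")
upgrade(conn)
cols = conn.execute("PRAGMA table_info(example_instances)").fetchall()
```

A later migration that reuses `UUID_LEN` cannot silently drift to a different width the way a hand-copied `36` can.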

Now, if we don't go back and add those into the migrations, people
will just cut and paste examples from the old migrations, and
everything will start getting quite confusing. I would love to say
that wasn't true, but we know that's how it goes.

 That said, I do agree there needs to be a higher burden of proof that
 the change being made is functionally identical to before.

+1 and Rick said he has inspected the MySQL and PostgreSQL tables to
ensure he didn't change anything.

Cheers,
John



Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Joe Gordon
On Nov 1, 2013 10:20 AM, John Garbutt j...@johngarbutt.com wrote:

 Its intentional. Cells is there to split up your nodes into more
 manageable chunks.

 There are quite a few design summit sessions on looking into
 alternative approaches to our current scheduler.

 While I would love a single scheduler to make everyone happy, I am
 thinking we might end up with several scheduler, each with slightly
 different properties, and you pick one depending on what you want to
 do with your cloud.

Agreed.


 John

 On 31 October 2013 22:39, Jiang, Yunhong yunhong.ji...@intel.com wrote:
  I noticed several filters (AggregateMultiTenancyIsolation, ram_filter,
type_filter, AggregateInstanceExtraSpecsFilter) have DB access in
host_passes(). Some even access the DB on each invocation.

As you noticed, not all filters make sense for a large system.

 
  Just curious if this is considered a performance issue? With a cloud of 10k
nodes, 60 VMs per node, and a 3-hour VM life cycle, it will have more
than 1 million DB accesses per second. Not a small number IMHO.
 
  Thanks
  --jyh
 


Re: [openstack-dev] [qa][keystone] Help with XML Tempest API tests

2013-11-01 Thread Christopher Yeoh
On Fri, Nov 1, 2013 at 8:55 PM, Sean Dague s...@dague.net wrote:

 On 11/01/2013 04:58 AM, Steven Hardy wrote:

going into testing is finding some real bugs! :) )

 I notice there's a review associated with that bug, but I can't view it -
 if it's draft can you please add me to the reviewers list?

 Looking forward to seeing the patch, as you said the unit test examples
 should help me complete my Tempest patch.

 Also note I raised:

 https://bugs.launchpad.net/keystone/+bug/1246831

 Which seems somewhat related to this (we get a 500 with the XML encoded
 expires_at=None, which results in a zero-length string, so the trust
 controller treats it as a valid timestamp instead of ignoring it)

 I was planning to send a patch for the latter, but seems like it may
 overlap with your XML fixes, so I'll hold off for now.


 My experience with the nova API and adding XML testing is that any
 service's XML API is broken by default (because the underlying logic is
 pretty JSON skewed, and the python clients talk JSON). So this isn't very
 surprising. Thanks again for diving into it!

 Honestly, one of these days we should have another serious conversation
 about dropping XML entirely again (across all projects). A single data
 payload that works is way better than additional payloads that don't.


+1 !


Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-11-01 Thread John Garbutt
On 29 October 2013 16:11, Eddie Sheffield eddie.sheffi...@rackspace.com wrote:

 John Garbutt j...@johngarbutt.com said:

 Going back to Joe's comment:
 Can both of these cases be covered by configuring the keystone catalog?
 +1

 If both v1 and v2 are present, pick v2, otherwise just pick what is in
 the catalogue. That seems cool. Not quite sure how the multiple glance
 endpoints work in the keystone catalog, but should work I assume.

 We hard code nova right now, and so we probably want to keep that route too?

 Nova doesn't use the catalog from Keystone when talking to Glance. There is a 
 config value glance_api_servers which defines a list of Glance servers that 
 gets randomized and cycled through. I assume that's what you're referring to 
 with we hard code nova. But currently there's nowhere in this path 
 (internal nova to glance) where the keystone catalog is available.

Yes. I was not very clear. I am proposing we change that. We could try
to shoehorn the multiple glance nodes into the keystone catalog, then cache
that in the context, but maybe that doesn't make sense. This is a
separate change really.

But clearly, we can't drop the direct configuration of glance servers
for some time either.

 I think some of the confusion may be that Glanceclient at the programmatic 
 client level doesn't talk to keystone. That happens higher up, at the CLI 
 level, which doesn't come into play here.

 From: Russell Bryant rbry...@redhat.com
 On 10/17/2013 03:12 PM, Eddie Sheffield wrote:
 Might I propose a compromise?

 1) For the VERY short term, keep the config value and get the change 
 otherwise
 reviewed and hopefully accepted.

 2) Immediately file two blueprints:
- python-glanceclient - expose a way to discover available versions
- nova - depends on the glanceclient bp and allowing autodiscovery of 
 glance
 version
 and making the config value optional (tho not deprecated / 
 removed)

 Supporting both seems reasonable.  At least then *most* people don't
 need to worry about it and it just works, but the override is there if
 necessary, since multiple people seem to be expressing a desire to have
 it available.

 +1

 Can we just do this all at once?  Adding this to glanceclient doesn't
 seem like a huge task.

 I worry about us never getting the full solution, but it seems to have
 got complicated.

 The glanceclient side is done, as far as allowing access to the list of 
 available API versions on a given server. It's getting Nova to use this info 
 that's a bit sticky.

Hmm, OK. Could we not just cache the detected version, to reduce the
impact of that decision?
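A cached discovery helper along those lines might look like this; the `probe` callable standing in for a `GET /versions` request, and the prefer-v2 rule, are assumptions taken from the thread rather than real glanceclient API:

```python
import functools

def make_version_picker(probe):
    """probe(server) returns the API versions a glance server advertises,
    e.g. ['v1.1', 'v2.0'] -- it stands in for a GET /versions call."""
    @functools.lru_cache(maxsize=None)
    def pick(server):
        versions = probe(server)
        # prefer v2 when both are advertised, as discussed in the thread
        return 'v2' if any(v.startswith('v2') for v in versions) else 'v1'
    return pick

calls = []
def fake_probe(server):
    calls.append(server)
    return ['v1.1', 'v2.0']

pick = make_version_picker(fake_probe)
pick('glance-1.example.com')
pick('glance-1.example.com')   # cached: probe runs only once per server
```

The cache is what removes the per-call discovery overhead Eddie worries about below: each server in the list is probed once, at first use.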

 On 28 October 2013 15:13, Eddie Sheffield eddie.sheffi...@rackspace.com 
 wrote:
 So...I've been working on this some more and hit a bit of a snag. The
 Glanceclient change was easy, but I see now that doing this in nova will
 require a pretty huge change in the way things work. Currently, the API
 version is grabbed from the config value, the appropriate driver is
 instantiated, and calls go through that. The problem comes in that the
 actual glance server isn't communicated with until very late in the
 process. Nothing sees the servers at the level where the driver is
 determined. Also there isn't a single glance server but a list of them,
 and in the event of certain communication failures the list is cycled
 through until success or a number of retries has passed.

 So to change this to auto-configuring will require turning this upside down,
 cycling through the servers at a higher level, choosing the appropriate
 driver for that server, and handling retries at that same level.

 Doable, but a much larger task than I first was thinking.

 Also, I don't really want the added overhead of getting the API versions
 before every call, so I'm thinking that going through the list of servers
 at startup, discovering the versions then, and caching that somehow would
 be helpful as well.

 Thoughts?

 I do worry about that overhead. But with Joe's comment, does it not
 just boil down to caching the keystone catalog in the context?

 I am not a fan of all the specific talk to glance code we have in
 nova, moving more of that into glanceclient can only be a good thing.
 For the XenServer integration, for efficiency reasons, we need glance
 to talk from dom0, so it has dom0 making the final HTTP call. So we
 would need a way of extracting that info from the glance client. But
 that seems better than having that code in nova.

 I know in Glance we've largely taken the view that the client should be as 
 thin and lightweight as possible so users of the client can make use of it 
 however they best see fit. There was an earlier patch that would have moved 
 the whole image service layer into glanceclient that was rejected. So I think 
 there is a division in philosophies here as well.

Hmm, I would be a fan of supporting both use cases, nova style and
more complex. Just seems better for glance to own as much as 

Re: [openstack-dev] [Nova]Ideas of idempotentcy-client-token

2013-11-01 Thread John Garbutt
On 30 October 2013 01:51, haruka tanizawa harube...@gmail.com wrote:
 Hi John!

 Thank you for your reply:)
 Sorry for inline comment.


 We also need something that doesn't clash with the cross-service
 request id, as that is doing something slightly different. Would
 idempotent-request-id work better?

 Oh, yes.
 Are you talking about this BP
 (https://blueprints.launchpad.net/nova/+spec/cross-service-request-id)?
 (I am going to go that HK session.)
 So, I will use your opinion and try to go forward.


 Also, I assume we are only adding this into the v3 API? We should
 close the v2 API for additions I guess?

 Now I have only adapted the v2 API, so it is also necessary to cope with the v3
 API.
 Did I answer your question?

We certainly need to add it into the v3 API; all new features must go there.

However, to ensure the v3 API gets released in Icehouse, I would love
to close the v2 API for changes, but perhaps I am being too harsh, and
we should certainly only do that after the point where we promise not
to make backwards-incompatible changes in the v3 API.
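Server-side, the idempotent-request-id idea amounts to a dedup table keyed by the client's token; the names below are illustrative, not the proposed Nova API:

```python
# Illustrative only: parameter name and in-memory storage are assumptions.
_results_by_token = {}

def create_server(idempotent_request_id, payload):
    """Replay-safe create: a retried request carrying the same token
    returns the original result instead of booting a second VM."""
    if idempotent_request_id in _results_by_token:
        return _results_by_token[idempotent_request_id]
    result = {'instance_id': len(_results_by_token) + 1, 'payload': payload}
    _results_by_token[idempotent_request_id] = result
    return result

first = create_server('token-abc', {'flavor': 'm1.small'})
retry = create_server('token-abc', {'flavor': 'm1.small'})
```

The client can then retry freely after a timeout without risking duplicate instances, which is the whole point of the blueprint.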

John



Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-01 Thread Rosa, Andrea (HP Cloud Services)

-Original Message-
From: Chris Friesen [mailto:chris.frie...@windriver.com]
Sent: 31 October 2013 17:07
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] When is it okay for submitters to say 'I don't
want to add tests' ?

On 10/31/2013 06:04 AM, Rosa, Andrea (HP Cloud Services) wrote:

 A - there is no test suite at all, adding one is unreasonable
 B - this thing cannot be tested in this context (e.g. functional tests
 are defined in a different tree)
 C - this particular thing is very hard to test
 D - testing this won't offer benefit

 In my opinion C, instead of being an acceptable reason for not having tests, is
a symptom of one of two things:
 1) The submitter doesn't know how to write tests, in which case
 someone else can help with suggestions
 2) The code we are trying to test is too complicated, so it's time to
 refactor it
 refactor it

 And about D, In my opinion  tests always offer benefits, like code coverage
or helping in understanding the code.

I think there are actually cases where C is valid.  It's difficult to test 
certain
kinds of race conditions, for example, unless you have very low-level hooks
into the guts of the system in order to force the desired conditions to 
reliably
occur at exactly the right time.

Well, it depends which kind of tests we are talking about.
I was talking about unit tests, and I totally agree with Sandy when he said that
everything can be tested and should be.
As for testing certain kinds of race conditions, those kinds of tests are not always
unit tests; I'd consider them functional tests.
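As an aside, the "low-level hooks" Chris describes can make a classic race deterministic in a unit test. A toy sketch (not OpenStack code):

```python
class Counter:
    """Toy read-modify-write with a test-only hook between read and write."""
    def __init__(self):
        self.value = 0
        self.after_read = lambda: None   # test hook, no-op in production

    def increment(self):
        v = self.value
        self.after_read()                # a test can interleave here
        self.value = v + 1

c = Counter()
fired = []
def interleave():
    if not fired:                        # one-shot guard against recursion
        fired.append(True)
        c.increment()                    # sneak a second increment in between
c.after_read = interleave
c.increment()
```

The hook forces the lost-update interleaving on every run: `increment` executes twice, yet `value` ends up 1, with no sleeps or timing luck involved.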

Regards
--
Andrea Rosa





Re: [openstack-dev] [nova] Closes-Bug and Launchpad

2013-11-01 Thread Julie Pichon
Hi Gary,

Gary Kotton gkot...@vmware.com wrote:
 Hi,
 Over the last few days I have noticed that bug fixes posted to gerrit are not
 updated in Launchpad. Am I doing something wrong? I think that the commit
 message is the correct format: Closes-Bug: #bug number.
 Any ideas?

I've seen the linking fail when the bug number is followed by a period.

That seems to match the regexp, if I'm looking at the right place:

https://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/update_bug.py#n250
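Whether a trailing period breaks the link depends entirely on where the pattern's match boundary falls, which is why checking the actual regexp (linked above) matters. An illustrative pattern — not the exact jeepyb one — where the period is harmless, because the match ends at the last digit:

```python
import re

# Illustrative only -- not the exact jeepyb pattern linked above.
BUG_RE = re.compile(r'closes-bug:\s*#?(\d+)', re.IGNORECASE)

def referenced_bugs(commit_message):
    return [int(n) for n in BUG_RE.findall(commit_message)]

bugs = referenced_bugs("Fix the thing.\n\nCloses-Bug: #1246941.")
```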

Julie

 Thanks
 Gary
 


Re: [openstack-dev] [nova][scheduler] Instance Group Model and APIs - Updated document with an example request payload

2013-11-01 Thread John Garbutt
On 29 October 2013 20:18, Mike Spreitzer mspre...@us.ibm.com wrote:
 John Garbutt j...@johngarbutt.com wrote on 10/29/2013 07:29:19 AM:
 ...

 Its looking good, but I was thinking about a slightly different approach:

 * I would like to see instance groups be used to describe all
 scheduler hints (including, please run on cell X, or please run on
 hypervisor Y)

 I think Yathi's proposal is open in the sense that any type of policy can
 appear (we only have to define the policy types :-).  Removing old features
 from the existing API is something that would have to be done over time, if
 at all.

I think it's important we unite the old and new worlds behind a single
backend implementation, in this case.

 * passing old scheduler hints to the API will just create a new
 instance group to persist the request
 Yes, implementation re-org is easier than retiring the old API.

I would say remove the hints in v3, but I like them too much. They
are a nice shorthand.

 * ensure live-migrate/migrate never lets you violate the rules in the
 user hints, at least don't allow it to happen by accident

 Right, that's why we are persisting the policy information.

Sure, I just wanted to raise that. It's why I would like to see the
existing hints migrated to the new persisted policy world.

 * I was expecting to see hard and soft constraints/hints, like: try
 keep in same switch, but make sure on separate servers

 Good point, I forgot to mention that in my earlier reviews of the model!

No worries.
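The hard/soft distinction John sketches could be modelled roughly like this (the field names and policy strings are made up for illustration, not the proposed API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GroupConstraint:
    policy: str   # e.g. 'same-switch', 'different-host'
    scope: str    # e.g. 'nova:switch_group', 'nova:host'
    hard: bool    # hard: reject violating placements; soft: just prefer them

# "try to keep in the same switch, but make sure on separate servers"
policies = [
    GroupConstraint('same-switch', 'nova:switch_group', hard=False),
    GroupConstraint('different-host', 'nova:host', hard=True),
]
```

A scheduler would treat hard constraints as filters and soft ones as weighers, which maps cleanly onto the existing filter/weigher split.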

 * Would be nice to have admin defined global options, like: ensure
 tenant does not have two servers on the same hypervisor or soft

 That's the second time I have seen that idea in a week, there might be
 something to it.

I think it could replace some of the existing filters, in a nice
descriptive way, that could open up the ability of users to override
such a decision, in a controlled way.

 * I expected to see the existing boot server command simply have the
 addition of a reference to a group, keeping the existing methods of
 specifying multiple instances

 That was my expectation too, for how a 2-stage API would work.  (A 1-stage
 API would not have the client making distinct calls to create the
 instances.)

Yes, the 1-stage would amount to scheduler hints today, that then
auto-create the instance groups.

 * I agree you can't change a group's spec once you have started some
 VMs in that group, but you could then simply launch more VMs keeping
 to the same policy

 Not if a joint decision was already made based on the totality of the group.

So in some cases you would have to reject the request, but in some
cases you might be able to spread out a few extra VMs, without moving
the old ones. I am thinking about people setting up their Hadoop
system, then looking to quickly add capacity into particular existing
clusters.

 * augment the server details (and group?) with more location
 information saying where the scheduler actually put things, obfuscated
 on per tenant basis. So imagine nova, cinder, neutron exposing ordered
 (arbitrary tagged) location metadata like nova: ((host_id, foo),
 (switch_group_id: bar), (power_group: bas))

 +1

So I see these all being used to scope a constraint, like:
* all on the same switch (in the nova sense)
* but also on different hypervisors (in the nova sense)

You could then widen the scope by bringing in Cinder and Neutron
location information.
* all on different hypervisors
* close to volume X and volume Y
* able to connect to private network Z

 * the above should help us define the scope of a constraint relative
 to either a nova, cinder or neutron resource.

 I am lost.  What above, what scope definition problem?

Sorry, bad description, hopefully I described the scope better in the
description above?

 * Consider a constraint that includes constraints about groups, like
 must be separate to group X, in the scope of the switch, or something
 like that

 I think Yathi's proposal, with the policy types I suggested, already does a
 lot of stuff like that.  But I do not know what you mean by in the scope of
 the switch.  I think you mean a location constraint, but am not sure which
 switch you have in mind.  I would approach this perhaps a little more
 abstractly, as a collocation constraint between two resources that are known
 to and meaningful to the client (yes, we are starting with Nova only in
 Icehouse, hope to go holistic later).


 * Need more thought on constraints between volumes, servers and
 networks, I don't think edges are the right way to state that, I think
 it would be better as a cross group constraint, where the scope of the
 constraint is related to neutron.

 I need more explanation or concrete examples to understand what problem(s)
 you are thinking of.  We are explicitly limiting ourselves to Nova at first,
 later will add in other services.

I agree we are targeting Nova first, but I would hate to change the
API again, if we don't have to.

My main objection is the 

Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread Sean Dague

On 11/01/2013 06:27 AM, John Garbutt wrote:

On 31 October 2013 16:57, Johannes Erdfelt johan...@erdfelt.com wrote:

On Thu, Oct 31, 2013, Sean Dague s...@dague.net wrote:

So there is a series of patches starting with -
https://review.openstack.org/#/c/53417/ that go back and radically
change existing migration files.


I initially agreed with the -2, but actually I like this change, but I
will get to that later.


This is really a no-no, unless there is a critical bug fix that
absolutely requires it. Changing past migrations should be
considered with the same level of weight as an N-2 backport, only
done when there is huge upside to the change.

I've -2ed the first 2 patches in the series, though that review
applies to all of them (I figured a mailing list thread was probably
more useful than -2ing everything in the series).

There needs to be really solid discussion about the trade offs here
before contemplating something as dangerous as this.


The most important thing for DB migrations is that they remain
functionally identical.


+1

We really should never change what the migrations functionally do.

Admittedly we should ensure we don't change something by accident,
so I agree with minimizing the changes in those files also.


Historically we have allowed many changes to DB migrations that kept
them functionally identical to how they were before.

Looking through the commit history, here's a sampling of changes:

- _ was no longer monkey patched, necessitating a new import added
- fix bugs causing testing problems
- change copyright headers
- remove unused code (creating logger, imports, etc)
- fix bugs causing the migrations to fail to function (on PostgreSQL,
   downgrade bugs, etc)
- style changes (removing use of locals(), whitespace, etc)
- make migrations faster
- add comments to clarify code
- improve compatibility with newer versions of SQLAlchemy

The reviews you're referencing seem to fall into what we have
historically allowed.


+1 The patch is really just refactoring.

I think we should move to the more descriptive field names, so we
remove the risk of cut and paste errors in string length, etc.

Now, if we don't go back and add those into the migrations, people
will just cut and paste examples from the old migrations, and
everything will start getting quite confusing. I would love to say
that wasn't true, but we know that's how it goes.


It's trading one source of bugs for another. I'd love to say we can have 
our cake and eat it too, but we really can't. And I very much fall on the 
side of getting migrations right is hard, updating past migrations without 
ever forking the universe is really really hard, and we've completely 
screwed it up in the past, so let's not do it.



That said, I do agree there needs to be a higher burden of proof that
the change being made is functionally identical to before.


+1 and Rick said he has inspected the MySQL and PostgreSQL tables to
ensure he didn't change anything.


So I'm going to call a straight BS on that. In at least one of the cases 
columns were shortened from 256 to 255. In the average case would that 
be an issue? Probably not. However that's a truncation, and a completely 
working system at 256 length for those fields could go to non-working
with data truncation. Data loads matter. And we can't assume anything 
about the data in those fields that isn't enforced by the DB schema itself.


I've watched us mess this up multiple times in the past when we were 
*sure* it was good. And as has been noticed recently, one of the 
collapses changed an fk name (by accident), which broke upgrades to 
havana for a whole class of people.


So I think that we really should put a moratorium on touching past 
migrations until there is some sort of automatic validation that the new 
and old path are the same, with sufficiently complicated data that 
pushes the limits of those fields.
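
Such a check could, in outline, build the schema via the old and the new migration paths in two scratch databases and diff the reflected results. A minimal sketch (sqlite3 stands in for the real backends; none of this is existing nova code):

```python
# Hypothetical sketch of the automated validation proposed above: run
# the old and the new migration paths against two scratch databases,
# then diff the reflected schemas. sqlite3 is used here purely for
# illustration.
import sqlite3

def reflect_schema(conn):
    """Return {table: [(column, type, notnull), ...]} for a database."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {t: [(c[1], c[2], c[3]) for c in
                conn.execute("PRAGMA table_info(%s)" % t)]
            for t in sorted(tables)}

def schemas_match(conn_old, conn_new):
    return reflect_schema(conn_old) == reflect_schema(conn_new)
```

A real harness would additionally load data that pushes the limits of each field, as argued above, since schema equality alone does not catch truncation of in-flight data.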


Manual inspection by one person that their environment looks fine has 
never been a sufficient threshold for merging code.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread Daniel P. Berrange
On Fri, Nov 01, 2013 at 07:20:19AM -0400, Sean Dague wrote:
 On 11/01/2013 06:27 AM, John Garbutt wrote:
 On 31 October 2013 16:57, Johannes Erdfelt johan...@erdfelt.com wrote:
 On Thu, Oct 31, 2013, Sean Dague s...@dague.net wrote:
 So there is a series of patches starting with -
 https://review.openstack.org/#/c/53417/ that go back and radically
 change existing migration files.
 
 I initially agreed with the -2, but actually I like this change, but I
 will get to that later.
 
 This is really a no-no, unless there is a critical bug fix that
 absolutely requires it. Changing past migrations should be
 considered with the same level of weight as an N-2 backport, only
 done when there is huge upside to the change.
 
 I've -2ed the first 2 patches in the series, though that review
 applies to all of them (I figured a mailing list thread was probably
 more useful than -2ing everything in the series).
 
 There needs to be really solid discussion about the trade offs here
 before contemplating something as dangerous as this.
 
 The most important thing for DB migrations is that they remain
 functionally identical.
 
 +1
 
 We really should never change what the migrations functionally do.
 
 Admittedly we should ensure we don't change something by accident,
 so I agree with minimizing the changes in those files also.
 
 Historically we have allowed many changes to DB migrations that kept
 them functionally identical to how they were before.
 
 Looking through the commit history, here's a sampling of changes:
 
 - _ was no longer monkey patched, necessitating a new import added
 - fix bugs causing testing problems
 - change copyright headers
 - remove unused code (creating logger, imports, etc)
 - fix bugs causing the migrations to fail to function (on PostgreSQL,
downgrade bugs, etc)
 - style changes (removing use of locals(), whitespace, etc)
 - make migrations faster
 - add comments to clarify code
 - improve compatibility with newer versions of SQLAlchemy
 
 The reviews you're referencing seem to fall into what we have
 historically allowed.
 
 +1 The patch is really just refactoring.
 
 I think we should move to the more descriptive field names, so we
 remove the risk of cut and paste errors in string length, etc.
 
 Now, if we don't go back and add those into the migrations, people
 will just cut and paste examples from the old migrations, and
 everything will start getting quite confusing. I would love to say
 that wasn't true, but we know that's how it goes.
 
 It's trading one source of bugs for another. I'd love to say we can
 have our cake and eat it too, but we really can't. And I very much
 fall on the side of getting migrations right is hard, updating past
 migrations without ever forking the universe is really really hard,
 and we've completely screwed it up in the past, so let's not do it.
 
 That said, I do agree there needs to be a higher burden of proof that
 the change being made is functionally identical to before.
 
 +1 and Rick said he has inspected the MySQL and PostgreSQL tables to
 ensure he didn't change anything.
 
 So I'm going to call a straight BS on that. In at least one of the
 cases columns were shortened from 256 to 255. In the average case
 would that be an issue? Probably not. However that's a truncation,
 and a completely working system at 256 length for those fields could
 go to non-working with data truncation. Data loads matter. And we
 can't assume anything about the data in those fields that isn't
 enforced by the DB schema itself.
 
 I've watched us mess this up multiple times in the past when we were
 *sure* it was good. And as has been noticed recently, one of the
 collapses changed an fk name (by accident), which broke upgrades to
 havana for a whole class of people.
 
 So I think that we really should put a moratorium on touching past
 migrations until there is some sort of automatic validation that the
 new and old path are the same, with sufficiently complicated data
 that pushes the limits of those fields.

Agreed, automated validation should be a mandatory pre-requisite
for this kind of change. I've done enough mechanical no-op
refactoring changes in the past to know that humans always screw
something up - we're just not good at identifying the needle in a
haystack. For data model upgrade changes this risk is too serious to
ignore.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Locking and ZooKeeper - a space oddysey

2013-11-01 Thread Julien Danjou
On Thu, Oct 31 2013, Monty Taylor wrote:

 Sigh.

 Yay We've added more competing methods of complexity!!!

 Seriously. We now think that rabbit and zookeeper and mysql are ALL needed?

Yes, if you have a synchronization problem that Paxos can resolve,
leveraging ZooKeeper is a good idea IMHO.
Depending _always_ on ZooKeeper is maybe not the best call, which is why
I have in mind proposing a library in Oslo that provides several
drivers solving this synchronization issue, where one of the drivers
could be ZK-based.
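
A minimal sketch of what such an Oslo abstraction might look like; the names here are hypothetical, and a real ZooKeeper driver would wrap a client library such as kazoo:

```python
# Hypothetical sketch of a driver-pluggable lock API; none of these
# names are an existing Oslo interface.
import threading

class LockDriver(object):
    """Interface a backend (local, ZooKeeper, ...) would implement."""
    def acquire(self, name):
        raise NotImplementedError
    def release(self, name):
        raise NotImplementedError

class LocalLockDriver(LockDriver):
    """Process-local backend; fine for one node, or as a test double."""
    def __init__(self):
        self._locks = {}
    def acquire(self, name):
        self._locks.setdefault(name, threading.Lock()).acquire()
    def release(self, name):
        self._locks[name].release()

class synchronized(object):
    """Context manager so callers never touch a specific backend."""
    def __init__(self, driver, name):
        self.driver, self.name = driver, name
    def __enter__(self):
        self.driver.acquire(self.name)
    def __exit__(self, *exc):
        self.driver.release(self.name)
        return False
```

Callers would only ever use synchronized(), so a deployment needing Paxos-grade coordination could swap in a ZooKeeper-backed driver without touching calling code.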

As for MySQL, rest assured, it's not needed: you can use PostgreSQL.
;-)

-- 
Julien Danjou
# Free Software hacker # independent consultant
# http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Jaromir Coufal

Hello community,

I was wondering, since there are a lot of people who cannot attend Design 
Sessions, whether we can help them be present at least in some way.


My thinking was going in the direction of doing a hangout from the session. 
However a hangout has a limited audience of 10 people, so what we can do is 
to go 'on air' and do a YouTube stream of it. We can paste its link into 
the official etherpad of the session, so it will reach the targeted 
audience without any limitations.


Do you think this will all be possible? Is the internet connection 
powerful enough to handle this load?


Or do you have other ideas how to make sessions more available?

Cheers
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] Official Programs tag?

2013-11-01 Thread Ilya Shakhat
Sean,

Currently the grouping is two-layer: the higher layer splits
openstack-hosted from stackforge-hosted projects, the lower splits
core, incubation, docs, etc. The grouping may not be fully accurate, since it
needs to keep up with the latest changes in integrated / incubated projects
(bug https://bugs.launchpad.net/stackalytics/+bug/1244485).

I see there's a need to have stats for official projects (more correctly,
for projects belonging to official programs,
https://wiki.openstack.org/wiki/Programs), and to separate those stats from
other openstack-hosted projects.
So would it work to have a grouping like this?
 - Official OpenStack Projects
   - core
   - integrated
   - incubated
 - OpenStack complementary projects (e.g. infra projects like jeepyb, gitdm)
 - Stackforge projects (everything from github/stackforge)

Thanks,
Ilya


2013/10/27 Robert Collins robe...@robertcollins.net

 On 28 October 2013 04:32, Sean Dague s...@dague.net wrote:
  I've been looking at the stackalytics code and one of the areas where I think
  stackalytics has a structural issue is around the project_group tag.
 
  The existing project_group tags of core, incubation, documentation,
  infrastructure, and other are all fine and good; however, none of these
  actually represents a number I'd like to see, which is Official Programs
  contributions overall.
 
  Which would be core + documentation + infrastructure + QA / Devstack trees
  (+ maybe TripleO? I don't remember if that's an Official or an Incubated
  program at this point).

 TripleO is official

 NB: Incubation happens to projects, not programs.

 -Rob



 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] #openstack IRC meeting UTC 1300 Friday on PCI pass-through network support

2013-11-01 Thread Robert Li (baoli)
Just to clarify that the channel is #openstack-meeting

thanks,
Robert

On 10/30/13 1:57 PM, Robert Li (baoli) 
ba...@cisco.commailto:ba...@cisco.com wrote:

Hi,

Let's have a meeting with the #openstack IRC channel at UTC 1300 Friday.

Based on our email discussions so far, I listed the topics in below:
   -- physical network binding
   -- nova/neutron APIs to support SRIOV
   -- PCI Alias

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Davanum Srinivas
Jarda,

Here are some lessons learned from a friend who is on a team that used
Google Hangout - http://drbacchus.com/google-hangout-lessons-learned

-- dims

On Fri, Nov 1, 2013 at 8:33 AM, Jaromir Coufal jcou...@redhat.com wrote:
 Hello community,

 I was wondering, since there are a lot of people who cannot attend Design
 Sessions, whether we can help them be present at least in some way.

 My thinking was going in the direction of doing a hangout from the session.
 However a hangout has a limited audience of 10 people, so what we can do is to
 go 'on air' and do a YouTube stream of it. We can paste its link into the
 official etherpad of the session, so it will reach the targeted audience
 without any limitations.

 Do you think this will all be possible? Is the internet connection powerful
 enough to handle this load?

 Or do you have other ideas how to make sessions more available?

 Cheers
 -- Jarda

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Andrew Laski

On 11/01/13 at 10:16am, John Garbutt wrote:

It's intentional. Cells is there to split up your nodes into more
manageable chunks.


I don't think you mean to say that there's intentionally a performance 
issue.  But yes there are performance issues with the filter scheduler.  

Because I work on a deployment that uses cells to partition the workload 
I haven't seen them myself, but there are plenty of reports from others 
who have encountered them.  And it's easy to run some back of the napkin 
calculations like was done below and see that scheduling will require a 
lot of resources if there's no partitioning.




There are quite a few design summit sessions on looking into
alternative approaches to our current scheduler.

While I would love a single scheduler to make everyone happy, I am
thinking we might end up with several schedulers, each with slightly
different properties, and you pick one depending on what you want to
do with your cloud.


+1.  We have the ability to drop in different schedulers right now, but 
there's only one really useful scheduler in the tree.  There has been 
talk of making a more performant scheduler which schedules in a 'good 
enough' fashion through some approximation algorithm.  I would love to 
see that get introduced as another scheduler and not as a rework of the 
filter scheduler.  I suppose the chance scheduler could technically 
count for that, but I'm under the impression that it isn't used beyond 
testing.




John

On 31 October 2013 22:39, Jiang, Yunhong yunhong.ji...@intel.com wrote:

I noticed several filters (AggregateMultiTenancyIsolation, ram_filter, 
type_filter, AggregateInstanceExtraSpecsFilter) have DB access in 
host_passes(). Some will even access the DB on each invocation.

Just curious whether this is considered a performance issue. With 10k nodes, 
60 VMs per node, and a 3-hour VM life cycle, a cloud will see more than 1 
million DB accesses per second. Not a small number IMHO.
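
The arithmetic behind that estimate can be checked directly; the two-accesses-per-host_passes() factor below is an assumption for illustration, not a measured number:

```python
# Back-of-the-napkin check of the figure quoted above.
nodes = 10000
vms_per_node = 60
vm_lifetime_s = 3 * 3600.0          # 3 hour VM life cycle

boots_per_s = nodes * vms_per_node / vm_lifetime_s  # ~55.6 schedules/s
filter_calls_per_s = boots_per_s * nodes            # host_passes() on every node
db_accesses_per_s = filter_calls_per_s * 2          # assume 2 DB hits per call

print(int(db_accesses_per_s))  # ~1.1 million
```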

Thanks
--jyh

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Re : welcoming new committers

2013-11-01 Thread Romain Hardouin
 Indeed :) Can you share with us briefly what you found interesting in
 the sessions of Upstream University? Which ones did you go to?

Upstream University acted like a starter for me. I attended the September 
session in Paris.

I've been interested in OpenStack since Essex, but I did not dare submit my 
first patch. Folks at UU made this happen.
During the two-day group session we learned the basics of open source and 
how to interact with such a community.
Then, weekly assessments and phone calls help us stay motivated and not 
give up. We are given valuable advice on getting our fixes accepted.

Guys at Upstream University are really cool and proficient.


 Well, I think that anybody's opinion matters and you're not a new
 OpenStack developer anymore. You have your own experience and your
 reviews may definitely help somebody even newer than you to get his/her
 patch refined before the more experienced developers get to it. I'm sure
 your comments, even without a vote, would help. Chime in then :)

Thanks for your kind words. 
I'm going to start reviewing. I take note of what Jeremy Stanley said:
OpenStack does not lack developers... it lacks reviewers.

-Romain
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread Ben Nemec

On 2013-11-01 06:28, Daniel P. Berrange wrote:

On Fri, Nov 01, 2013 at 07:20:19AM -0400, Sean Dague wrote:

On 11/01/2013 06:27 AM, John Garbutt wrote:
On 31 October 2013 16:57, Johannes Erdfelt johan...@erdfelt.com wrote:
On Thu, Oct 31, 2013, Sean Dague s...@dague.net wrote:
So there is a series of patches starting with -
https://review.openstack.org/#/c/53417/ that go back and radically
change existing migration files.

I initially agreed with the -2, but actually I like this change, but I
will get to that later.

This is really a no-no, unless there is a critical bug fix that
absolutely requires it. Changing past migrations should be
considered with the same level of weight as an N-2 backport, only
done when there is huge upside to the change.

I've -2ed the first 2 patches in the series, though that review
applies to all of them (I figured a mailing list thread was probably
more useful than -2ing everything in the series).

There needs to be really solid discussion about the trade offs here
before contemplating something as dangerous as this.

The most important thing for DB migrations is that they remain
functionally identical.

+1

We really should never change what the migrations functionally do.

Admittedly we should ensure we don't change something by accident,
so I agree with minimizing the changes in those files also.

Historically we have allowed many changes to DB migrations that kept
them functionally identical to how they were before.

Looking through the commit history, here's a sampling of changes:

- _ was no longer monkey patched, necessitating a new import added
- fix bugs causing testing problems
- change copyright headers
- remove unused code (creating logger, imports, etc)
- fix bugs causing the migrations to fail to function (on PostgreSQL,
   downgrade bugs, etc)
- style changes (removing use of locals(), whitespace, etc)
- make migrations faster
- add comments to clarify code
- improve compatibility with newer versions of SQLAlchemy

The reviews you're referencing seem to fall into what we have
historically allowed.

+1 The patch is really just refactoring.

I think we should move to the more descriptive field names, so we
remove the risk of cut and paste errors in string length, etc.

Now, if we don't go back and add those into the migrations, people
will just cut and paste examples from the old migrations, and
everything will start getting quite confusing. I would love to say
that wasn't true, but we know that's how it goes.

It's trading one source of bugs for another. I'd love to say we can
have our cake and eat it too, but we really can't. And I very much
fall on the side of getting migrations right is hard, updating past
migrations without ever forking the universe is really really hard,
and we've completely screwed it up in the past, so let's not do it.

That said, I do agree there needs to be a higher burden of proof that
the change being made is functionally identical to before.

+1 and Rick said he has inspected the MySQL and PostgreSQL tables to
ensure he didn't change anything.

So I'm going to call a straight BS on that. In at least one of the
cases columns were shortened from 256 to 255. In the average case
would that be an issue? Probably not. However that's a truncation,
and a completely working system at 256 length for those fields could
go to non-working with data truncation. Data loads matter. And we
can't assume anything about the data in those fields that isn't
enforced by the DB schema itself.

I've watched us mess this up multiple times in the past when we were
*sure* it was good. And as has been noticed recently, one of the
collapses changes a fk name (by accident), which broke upgrades to
havana for a whole class of people.

So I think that we really should put a moratorium on touching past
migrations until there is some sort of automatic validation that the
new and old path are the same, with sufficiently complicated data
that pushes the limits of those fields.


Agreed, automated validation should be a mandatory pre-requisite
for this kind of change. I've done enough mechanical no-op
refactoring changes in the past to know that humans always screw
something up - we're just not good at identifying the needle in a
haystack. For data model upgrade changes this risk is too serious to
ignore.


FWIW, there's work going on in Oslo around validating that our 
migrations result in a schema that matches the intended model.  IIUC, 
that should help catch a lot of errors in changes for both old and new 
migrations.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [savanna] [trove] Place for software configuration

2013-11-01 Thread Alexander Kuznetsov
Jay, do you have a plan to add Savanna (type: Heat::Savanna) and Trove
(type: Heat::Trove) providers to the HOT DSL?


On Thu, Oct 31, 2013 at 10:33 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 10/31/2013 01:51 PM, Alexander Kuznetsov wrote:

 Hi Heat, Savanna and Trove teams,

  All these projects have a common part related to software configuration
  management. To create an environment, a user should specify hardware
  parameters for VMs: choose a flavor, decide whether to use Cinder,
  configure networks for the virtual machines, and choose a topology for
  the whole deployment. The next step is linking the software parameters
  with the hardware specification. From the end user's point of view, the
  existence of three different places and three different ways (Heat HOT
  DSL, Trove clustering API and Savanna Hadoop templates) to configure
  software is not convenient, especially if the user wants to create an
  environment simultaneously involving components from Savanna, Heat and Trove.

  I can suggest two approaches to overcome this situation:

  A common library in Oslo. This approach allows deep domain-specific
  customization. The user will still have 3 places with the same UI where
  the user should perform configuration actions.

  Heat or some other component for software configuration management. This
  approach is the best for end users. In the future there will possibly be
  some limitations on deep domain-specific customization for configuration
  management.


 I think this would be my preference.

 In other words, describe and orchestrate a Hadoop or Database setup using
 HOT templates and using Heat as the orchestration engine.

 Best,
 -jay

  Heat, Savanna and Trove teams can you comment these ideas, what approach
 are the best?

 Alexander Kuznetsov.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [savanna] [trove] Place for software configuration

2013-11-01 Thread Alexander Kuznetsov
On Fri, Nov 1, 2013 at 12:39 AM, Clint Byrum cl...@fewbar.com wrote:

 Excerpts from Alexander Kuznetsov's message of 2013-10-31 10:51:54 -0700:
  Hi Heat, Savanna and Trove teams,
 
  All these projects have a common part related to software configuration
  management. To create an environment, a user should specify hardware
  parameters for VMs: choose a flavor, decide whether to use Cinder,
  configure networks for the virtual machines, and choose a topology for
  the whole deployment. The next step is linking the software parameters
  with the hardware specification. From the end user's point of view, the
  existence of three different places and three different ways (Heat HOT
  DSL, Trove clustering API and Savanna Hadoop templates) to configure
  software is not convenient, especially if the user wants to create an
  environment simultaneously involving components from Savanna, Heat and
  Trove.
 

 I'm having a hard time extracting the problem statement. I _think_ that
 the problem is:

 As a user I want to tune my software for my available hardware.

 So what you're saying is, if you select a flavor that has 4GB of RAM
 for your application, you want to also tell your application that it
 can use 3GB of RAM for an in-memory cache. Likewise, if one has asked
 Trove for an 8GB flavor, they will want to tell it to use 6.5GB of RAM
 for InnoDB buffer cache.
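
The pattern being described, deriving a software knob from the flavor's RAM, is simple to state as code (a toy sketch; the ratio values just echo the examples above and are not recommendations):

```python
# Toy illustration of the flavor-to-software-knob pattern described
# above; the ratio values are examples, not recommendations.
def cache_size_mb(flavor_ram_mb, ratio):
    """RAM the application may devote to its in-memory cache."""
    return int(flavor_ram_mb * ratio)
```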

 What you'd like to see is one general pattern to express these types
 of things?

Exactly.


  I can suggest two approaches to overcome this situation:
 
  A common library in Oslo. This approach allows deep domain-specific
  customization. The user will still have 3 places with the same UI where
  the user should perform configuration actions.
 
  Heat or some other component for software configuration management. This
  approach is the best for end users. In the future there will possibly be
  some limitations on deep domain-specific customization for configuration
  management.

 Can you maybe be more concrete with your proposed solutions? The lack
 of a clear problem statement combined with these vague solutions has
 thoroughly confused me.


 Sure. I suggest creating some library or component for standardizing
software and hardware configuration. It will contain validation logic
and parameter lists.

Right now Trove, Savanna and Heat all have a part related to hardware
configuration. For the end user, the VM description should not depend on the
component where it will be used.

Here is an example of a VM description that could be common to Savanna and
Trove:

{
    "flavor_id": 42,
    "image_id": "test",
    "volumes": [{
        # "extra" contains domain-specific parameters.
        # For instance, "aim" for Savanna could be
        # hdfs-dir or mapreduce-dir; for Trove:
        # journal-dir or db-dir.
        "extra": {
            "aim": "hdfs-dir"
        },
        "size": "10GB",
        "filesystem": "ext3"
    }, {
        "extra": {
            "aim": "mapreduce-dir"
        },
        "size": "5GB",
        "filesystem": "ext3"
    }],
    "networks": [{
        "private-network": "some-private-net-id",
        "public-network": "some-public-net-id"
    }]
}


Also, it would be great if this library or component standardized some
software configuration parameters, such as credentials for a database or
LDAP. This would greatly simplify integration between components. For
example, if a user wants to process data from Cassandra on Hadoop, the user
should provide a database location and credentials to Hadoop. If we have a
common standard for both Trove and Savanna, this can be done the same way in
both components. An example for Cassandra could look like this:


{
    "type": "cassandra",
    "host": "example.com",
    "port": 1234,
    "credentials": {
        "user": "test",
        "password": "123"
    }
}


These parameter names and this schema should be the same for all
components referencing a Cassandra server.
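
As a sketch of how a shared library could enforce such a format (the field names follow the example above; everything else is hypothetical, and a real implementation might use jsonschema instead):

```python
# Illustrative validator for the shared datastore description sketched
# above; the required field names follow the example, everything else
# is hypothetical.
REQUIRED = {"type": str, "host": str, "port": int, "credentials": dict}

def validate_datastore(desc):
    """Return a list of problems; an empty list means the description is OK."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in desc:
            errors.append("missing field: %s" % field)
        elif not isinstance(desc[field], ftype):
            errors.append("%s must be %s" % (field, ftype.__name__))
    if not errors:
        for key in ("user", "password"):
            if key not in desc["credentials"]:
                errors.append("credentials missing: %s" % key)
    return errors
```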

 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova XML serialization bug 1223358 moving discussion here to get more people involved

2013-11-01 Thread Rosa, Andrea (HP Cloud Services)
Hi all

Long story short: a long time ago I raised a bug [1] and started to work on 
the fix. GuoHui LIu (the reviewer) and I had a long and useful discussion 
about the right solution, but now we are stuck and we need some more 
opinions to find a proper one.

And now the long story:
When we have an instance booted from a volume and we don't specify the image 
details in the boot command, the XML serialization of the instance details 
fails and the API call (like nova show) returns a 500 error.
The problem is that the image property is mandatory to serialize, but the 
XML serializer can't properly handle an empty value.
In particular, in xmlutil we have the class Selector, which selects a datum 
within a specific object; that class is designed to deal with missing data 
in the object but not with an empty object.
At the moment, the logic used in the method to deal with missing data is to 
catch KeyError or IndexError exceptions:

    try:
        obj = obj[elem]
    except (KeyError, IndexError):
        if do_raise:
            raise KeyError(elem)

My simple fix was to follow the same logic and additionally catch TypeError, 
which is raised when the passed object is empty (an empty string).
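As a simplified sketch (this mirrors the shape of the snippet above, but is 
stripped down for illustration and is not the actual nova xmlutil code), the 
proposed change looks like this:

```python
class Selector(object):
    """Stripped-down stand-in for nova's xmlutil Selector."""

    def __init__(self, *chain):
        self.chain = chain

    def __call__(self, obj, do_raise=False):
        for elem in self.chain:
            try:
                obj = obj[elem]
            # TypeError is the proposed addition: indexing an empty
            # string (the image value of a boot-from-volume instance)
            # raises it, so treat it like missing data instead of
            # letting it bubble up as a 500 error.
            except (KeyError, IndexError, TypeError):
                if do_raise:
                    raise KeyError(elem)
                return None
        return obj


sel = Selector("image", "id")
print(sel({"image": {"id": 42}}))  # 42
print(sel({"image": ""}))          # None instead of a TypeError
```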

One of the main complaints was that this approach adds some business logic 
to xmlutil, and that catching an extra exception could hide some potential 
errors.
I can't disagree, but at the same time I am only following the logic that we 
already have there.

We are now stuck, because the long-term solution is probably to rethink the 
XML serialization process to allow more flexibility, but that doesn't seem an 
easy task and I really want to get this bug fixed.

What do you think?
Is anyone available to have a look and give us an opinion? 

Please @Liu, feel free to add your comments or any missing points.

PS: I am not an expert on the nova xmlutil; it could be that I am missing 
some easy points. If so, please let me know.

Thanks 
--
Andrea Rosa

[1] https://bugs.launchpad.net/nova/+bug/1223358
 



Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Jaromir Coufal


On 2013/01/11 13:48, Davanum Srinivas wrote:

Jarda,

Here are some lessons learned from a friend who is on a team that used
Google Hangout - http://drbacchus.com/google-hangout-lessons-learned

-- dims


Awesome, thanks a lot for this cheat sheet :)

I have done a couple of them and had quite a good experience with this 
concept, so I think it might work well for our needs.


-- Jarda



Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Shawn Hartsock


- Original Message -
 From: Yunhong Jiang yunhong.ji...@intel.com
 To: openstack-dev@lists.openstack.org
 Sent: Thursday, October 31, 2013 6:39:29 PM
 Subject: [openstack-dev] [nova][scheduler]The database access in the  
 scheduler filters
 
 I noticed several filters (AggregateMultiTenancyIsoaltion, ram_filter,
 type_filter, AggregateInstanceExtraSpecsFilter) have DB access in the
 host_passes(). Some will even access for each invocation.
 
 Just curios if this is considered a performance issue? With a 10k nodes, 60
 VM per node, and 3 hours VM life cycle cloud, it will have more than 1
 million DB access per second. Not a small number IMHO.
 
 Thanks
 --jyh
 
 

Sorry if I'm dumb, but please try to explain things to me. I don't think I 
follow...

10k nodes, 60 VM per node... is 600k VM in the whole cloud. A 3 hour life cycle 
for a VM means every hour 1/3 the nodes turn over so 200k VM  are 
created/deleted per hour ... divide by 60 for ... 3,333.333 per minute or ... 
divide by 60 for ... 55.5 VM creations/deletions per second ...

... did I do that math right? So where's the million DB accesses per second 
come from? Are the rules fired for every VM on every access so that 600k VM + 1 
new VM means the rules fire 600k + 1 times? What? Sorry... really confused.
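Writing the assumption out explicitly (and it is only a guess at what the 
original claim meant): if each DB-touching filter's host_passes() runs once 
per candidate host for every scheduling request, the numbers line up:

```python
nodes = 10_000
vms_per_node = 60
lifecycle_hours = 3

total_vms = nodes * vms_per_node                # 600,000 VMs in the cloud
creates_per_sec = total_vms / (lifecycle_hours * 3600)
print(round(creates_per_sec, 1))                # 55.6 -- Shawn's number

# If host_passes() hits the DB once per host per scheduling request:
db_accesses_per_sec = creates_per_sec * nodes
print(round(db_accesses_per_sec))               # 555556 -- over a million
                                                # with two such filters
```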

# Shawn Hartsock



[openstack-dev] [neutron][ceilometer] Network Notification plugin broken. Ceilometer bug #1243292

2013-11-01 Thread Sphoorti Joglekar
Team,

The tenant_id, subnet, and network fields are missing from some of the 
network notification payloads, and this looks like it happened rather 
recently. Ceilometer expects them to be present in each of the notifications.
Is this Neutron's expected behavior? Would changing the dictionary
reference from [key] to .get(key, default_value) help?
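To illustrate the suggestion (the payload contents here are hypothetical, not 
the actual Neutron notification body):

```python
# A newer-style payload missing the tenant_id field:
payload = {"network": {"id": "net-1"}}

# Current style: indexing raises KeyError and processing of the sample dies.
try:
    tenant = payload["tenant_id"]
except KeyError:
    tenant = None

# Proposed style: degrade gracefully to a default instead.
tenant = payload.get("tenant_id", None)
print(tenant)  # None
```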

Here is the link to the bug

https://bugs.launchpad.net/ceilometer/+bug/1243292


[openstack-dev] [TripleO] Releases of this week

2013-11-01 Thread Roman Podoliaka
Hi all,

This week I've been doing releases of all projects, which belong to
TripleO program. Here are release notes you might be interested in:

os-collect-config  - 0.1.5 (was 0.1.4):
- default polling interval was reduced to 30 seconds
- requirements were updated to use the new iso8601 version
fixing important bugs

diskimage-builder - 0.0.9 (was 0.0.8)
 - added support for bad Fedora image mirrors (retry the
request once on 404)
 - removed dependency on dracut-network from fedora element
 - fixed the bug with removing of lost+found dir if it's not found

tripleo-image-elements  - 0.1.0 (was 0.0.4)
 - switched to tftpd-hpa on Fedora and Ubuntu
 - made it possible to disable file injection in Nova
 - switched seed vm to Neutron native PXE
 - added Fedora support to apache2 element
 - fixed processing of routes in init-neutron-ovs
 - fixed Heat watch server url key name in seed vm metadata

tripleo-heat-templates - 0.1.0 (was 0.0.1)
 - disabled Nova Baremetal file injection (undercloud)
 - made LaunchConfiguration resources mergeable
 - made neutron public interface configurable (overcloud)
 - made it possible to set public interface IP (overcloud)
 - allowed making the public interface a VLAN (overcloud)
 - added a wait condition for signalling that overcloud is ready
 - added metadata for Nova floating-ip extension
 - added tuskar API service configuration
 - hid AdminToken in Heat templates
 - added Ironic service configuration

 tuskar - 0.0.2 (was 0.0.1)
 - made it possible to pass Glance image id
 - fixed the bug with duplicated Resource Class names

 tuskar-ui - 0.0.2 (was 0.0.1)
  - resource class creation form no longer ignores the image selection
  - separated flavors creation step
  - fail gracefully on node detail page when no overcloud
  - added validation of MAC addresses and CIDR values
  - stopped appending Resource Class name to Resource Class flavors
  - fixed JS warnings when $ is not available
  - fixed links and naming in Readme
  - various code and test fixes (pep8, refactoring)

  python-tuskarclient - 0.0.2 (was 0.0.1)
  - fixed processing of 301 response code

  os-apply-config and os-refresh-config haven't had new commits
since the last release

This also means that:
1. We are now releasing all the projects we have.
2. *tuskar* projects have got PyPi entries.

Last but not least.

I'd like to say a big thank you to Chris Jones who taught me 'Release
Management 101' and provided patches to openstack/infra-config to make
all our projects 'releasable'; Robert Collins for his advice on
version numbering; Clark Boylan and Jeremy Stanley for landing of
Gerrit ACL patches and debugging PyPi uploads issues; Radomir
Dopieralski and Tomas Sedovic for landing a quick fix to tuskar-ui.

Thank you all guys, you've helped me a lot!

Roman



[openstack-dev] Horizon PTL candidacy

2013-11-01 Thread Lyle, David
I would like to announce my candidacy for the Horizon PTL.

I have been developing, extending and contributing to Horizon through the 
Grizzly and Havana releases, serving as a core team member in the Havana 
cycle and, most recently in Icehouse, as interim PTL.

At the Havana OpenStack Summit, I led a design session on Keystone v3 and 
multiple region support in Horizon. During the Havana cycle, I contributed 
multi-region support, significant parts of Keystone v3 support, and the role 
based access control engine including enforcement for identity. While 
consistently reviewing changes:

http://russellbryant.net/openstack-stats/horizon-reviewers-30.txt
http://russellbryant.net/openstack-stats/horizon-reviewers-180.txt


Over the course of the Havana cycle, we've seen significant growth in the 
number of contributors to Horizon.  I would like to see that growth continue 
and also see significant growth in the number of active reviewers.  As PTL, I 
would enable continued growth by being highly engaged and easily accessible 
while encouraging greater communication and collaboration across our strong 
community of contributors.

In Icehouse, I would like to see updates to Horizon that allow for greater 
extensibility and a better overall user experience.
This includes:

-Improved navigation control -- 
The current navigation implementation is not very extensible, and could 
stand some overall improvement regarding context and organization.  

-Use of role based access control to reduce code redundancy -- 
We currently have nearly duplicate panels for end users and 
administrators; by incorporating role-based checks on actions and the 
methods that load data into the page, we can reduce the duplication and 
have common panels that work for both classes of user.

-Workflows that better and more intuitively support common use cases -- 
The current UI requires a fair amount of domain knowledge for an end user 
to be able to create entities in the stack.  I want to work to include a 
general wizard widget into the horizon library that can be leveraged to walk
users through common use cases like spinning up their first instance.

-Continued support for projects coming out of incubation and active support for 
new features in existing projects 

-Greater extensibility -- 
Horizon really has two purposes, one a library to build other UIs from, 
and two a working reference UI for OpenStack. While a great deal of effort 
is spent achieving the second purpose, the former still has great value 
and we need to continue to make the horizon library work in other UI 
implementations.

-Views for administrators that work for large installations -- 
The panels as currently designed provide no meaningful way to filter data.
They make a high number of API calls per panel, and those calls do not 
filter but instead fetch every item of a given type.

For those who aren't familiar with me from IRC, Horizon meetings or other 
interactions, I work for HP on Horizon where we use Horizon to manage the HP 
Public Cloud. Previously, I've held architect, technical lead and project 
manager positions.

Thanks,
David Lyle



Re: [openstack-dev] [Infra] [Tempest] [Ironic] Testing Ironic

2013-11-01 Thread Devananda van der Veen
On Thu, Oct 31, 2013 at 7:00 PM, Monty Taylor mord...@inaugust.com wrote:


 On 10/31/2013 04:38 PM, Devananda van der Veen wrote:
  Hi all,
 
  Ironic has reached a point where it is capable of being added to
  integration tests and the gate pipeline. Does that mean we /should/ add
  it? I think so, and I'd like to know what others think, but let me be
  more specific about what I'm talking about.

 Yes, we should - with caveats.

  At this point, it is possible for devstack to start ironic-api and
  ironic-conductor services, register them with keystone, init the db,
  etc. Then, tempest [1] can perform basic actions like create, update,
  delete records. Testing the hardware drivers will come later, which is
  where we'll see integration tests with glance, neutron, etc, become more
  important.
 
  Roman has been working on enabling these tests [2] and I'd like to nudge
  the reviewers of the tempest and infra teams to take a look at his
  patches. At the very least, I'd like to make folks aware of this work so
  we can all discuss it at the summit.

 Basically, we've been wanting for a while to see gate integration before
 something graduates incubation - but there is a chicken and egg sitch
 where we can't REALLY add it to the gate until after graduation.


Ah, I see.



 So, I think what we want to do is add the tests to tempest/devstack, and
 then make a job that only runs on ironic changes that runs that. That
 will show us that you're ready once graduation review comes up.


That works for me. Getting tempest/devstack tests going for Ironic
is what I really want, and yeah, that should happen before we're in all the
other projects' gate checks.



 I'll look at the infra patches (it's been slow with jeblair out)


Thanks!




  [1]
  Tempest tests for Ironic API: https://review.openstack.org/#/c/48109
 
  [2]
  Pre-cache Ironic to slaves: https://review.openstack.org/#/c/54569
  Enable Ironic in devstack-gate: https://review.openstack.org/#/c/53899
  Enable tempest tests in the experimental pipeline:
  https://review.openstack.org/#/c/53917
 
 
  Thanks!
  Devananda
 
 
 




Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Russell Bryant
On 11/01/2013 09:09 AM, Andrew Laski wrote:
 On 11/01/13 at 10:16am, John Garbutt wrote:
 Its intentional. Cells is there to split up your nodes into more
 manageable chunks.
 
 I don't think you mean to say that there's intentionally a performance
 issue.  But yes there are performance issues with the filter scheduler. 
 Because I work on a deployment that uses cells to partition the workload
 I haven't seen them myself, but there are plenty of reports from others
 who have encountered them.  And it's easy to run some back of the napkin
 calculations like was done below and see that scheduling will require a
 lot of resources if there's no partitioning.
 

 There are quite a few design summit sessions on looking into
 alternative approaches to our current scheduler.

 While I would love a single scheduler to make everyone happy, I am
 thinking we might end up with several scheduler, each with slightly
 different properties, and you pick one depending on what you want to
 do with your cloud.
 
 +1.  We have the ability to drop in different schedulers right now, but
 there's only one really useful scheduler in the tree.  There has been
 talk of making a more performant scheduler which schedules in a 'good
 enough' fashion through some approximation algorithm.  I would love to
 see that get introduced as another scheduler and not as a rework of the
 filter scheduler.  I suppose the chance scheduler could technically
 count for that, but I'm under the impression that it isn't used beyond
 testing.

Agreed.

There's a lot of discussion happening in two different directions, it
seems.  One group is very interested in improving the scheduler's
ability to make the best decision possible using various policies.
Another group is concerned with massive scale and is willing to accept
good enough scheduling to get there.

I think the filter scheduler is pretty reasonable for the best possible
decision approach today.  There's some stuff that could perform better.
 There are more policy knobs that could be added.  There's the cross
service issue to figure out ... but it's not bad.

I'm very interested in a new good enough scheduler.  I liked the idea
of running a bunch of schedulers that each only look at a subset of your
infrastructure and pick something that's good enough.  I'm interested to
hear other ideas in the session we have on this topic (rethinking
scheduler design).

Of course, you get a lot of the massive scale benefits by going to
cells, too.  If cells is our answer here, I really want to see more
people stepping up to help with the cells code.  There are still some
feature gaps to fill.  We should also be looking at the road to getting
back to only having one way to deploy nova (cells).  Having both cells
vs non-cells options really isn't ideal long term.

-- 
Russell Bryant



Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread Johannes Erdfelt
On Fri, Nov 01, 2013, Sean Dague s...@dague.net wrote:
 It's trading one source of bugs for another. I'd love to say we can
 have our cake and eat it to, but we really can't. And I very much
 fall on the side of getting migrations is hard, updating past
 migrations without ever forking the universe is really really hard,
 and we've completely screwed it up in the past, so lets not do it.

I understand what you're saying, but if the result of it is that we're
never going to touch old migrations, we're going to slowly build
technical debt.

I don't think it's an acceptable solution to throw up our hands and deal
with the pain.

We need to come up with a solution that allows us to stay agile while
also ensuring we don't break things.

 So I'm going to call a straight BS on that. In at least one of the
 cases columns were shortened from 256 to 255. In the average case
 would that be an issue? Probably not. However that's a truncation,
 and a completely working system at 256 length for those fields could
 go to non working with data truncation. Data loads matter. And we
 can't assume anything about the data in those fields that isn't
 enforced by the DB schema itself.

I assume this is the review you're talking about?

https://review.openstack.org/#/c/53471/3

FWIW, the old migrations *are* functionally identical. Those strings are
still 256 characters long.

It's the new migration that truncates data.

That said, I'm not sure I see the value in this particular cleanup
considering the fact it does truncate data (even if it's unlikely to
cause problems).

 I've watched us mess this up multiple times in the past when we were
 *sure* it was good. And as has been noticed recently, one of the
 collapses changes a fk name (by accident), which broke upgrades to
 havana for a whole class of people.
 
 So I think that we really should put a moratorium on touching past
 migrations until there is some sort of automatic validation that the
 new and old path are the same, with sufficiently complicated data
 that pushes the limits of those fields.
 
 Manual inspection by one person that their environment looks fine
 has never been a sufficient threshold for merging code.

I can get completely on board with that.

Does that mean you're softening your stance that migrations should never
be touched?

JE




[openstack-dev] [horizon] Building an application a top

2013-11-01 Thread Paul Belanger
Greetings all,

I am in the process of building out an application on top of horizon,
but my application actually has nothing to do with OpenStack.

You see, I am planning on using Keystone as my identity service for
the applications and would also like to add a GUI for user management.
 I know openstack_dashboard provides this, however right now, the
requirements require that I run keystone plus nova and glance.

I quickly hacked openstack_dashboard to simply provide 'Identity' as
the only admin panel for openstack_dashboard and it worked pretty
well.  However, the more I played around and read the documentation,
it seemed the right approach was to build atop of horizon and not
openstack_dashboard.

So, for the moment, I have embedded openstack_dashboard admin
(identity) and settings dashboards with my django project, but to be
honest I don't really want to do this or take on the maintenance
burden of maintaining them.

So, what I am wondering is how do you guys / gals see people who want
to leverage keystone (or another openstack project) and build
something atop horizon?  Would you be open to breaking out the
identity dashboard into its own package that people could extend upon?
Or is there another approach you recommend taking?

-- 
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: https://twitter.com/pabelanger



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Russell Bryant
On 11/01/2013 11:14 AM, Clayton Coleman wrote:
 - Original Message -
 Noorul Islam K M noo...@noorul.com writes:

 Adrian Otto adrian.o...@rackspace.com writes:

 Team,

 Our StackForge code repo is open, so you may begin submitting code for
 review. For those new to the process, I made a wiki page with links to
 the repo and information about how to contribute:

 https://wiki.openstack.org/wiki/Solum/Contributing


 1. .gitreview file is missing, so I submitted a patch

 https://review.openstack.org/#/c/54877

 
 Once all the gitreview stuff is cleaned up I was going to do some purely 
 mechanical additions.
 
 I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
 solum/db/api.py
   manager abstraction for db calls
 solum/db/sqlalchemy/api.py
   sqlalchemy implementation

I wouldn't just copy this layout, personally.

We should look at getting some of the nova object work into
oslo-incubator.  It provides a nice object model to abstract away the
database API details.  You really don't want to be returning sqlalchemy
models to the rest of the code base if you can get away with it.

If we were starting the Nova database integration work from scratch
today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
like it would make more sense to add the db.api equivalents to our
objects, and sub-class them to add specific database support.

 I was also going to throw in migrate as a dependency and put in the glue code 
 for that based on common use from ironic/trove/heat.  That'll pull in a few 
 openstack common and config settings.  Finally, was going to add a 
 solum-dbsync command a la the aforementioned projects.  No schema will be 
 added.

I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
we (OpenStack) have had to inherit it.  For new projects, you should use
alembic.  That's actively developed and maintained.  Other OpenStack
projects are either already using it, or making plans to move to it.

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Some initial code copying for db/migration (was: Stackforge Repo Ready)

2013-11-01 Thread Clayton Coleman
- Original Message -
 
 Once all the gitreview stuff is cleaned up I was going to do some purely
 mechanical additions.
 
 I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
 solum/db/api.py
   manager abstraction for db calls
 solum/db/sqlalchemy/api.py
   sqlalchemy implementation
 
 I was also going to throw in migrate as a dependency and put in the glue code
 for that based on common use from ironic/trove/heat.  That'll pull in a few
 openstack common and config settings.  Finally, was going to add a
 solum-dbsync command a la the aforementioned projects.  No schema will be
 added.
 
 Objections?
 

I was blindly assuming we want to pull in eventlet support, with the implicit 
understanding that we will be doing some form of timeslicing and async io bound 
waiting in the API... but would like to hear others weigh in before I add the 
monkey_patch and stub code around script startup.



Re: [openstack-dev] [stackalytics] Official Programs tag?

2013-11-01 Thread Thierry Carrez
Ilya Shakhat wrote:
 Currently the grouping is two-layer: the higher is to split between
 openstack-hosted and stackforge-hosted projects, the lower is to split
 core, incubation, docs, etc. The grouping may be not so accurate since
 it needs to comply with the latest changes in integrated / incubated
 projects (bug https://bugs.launchpad.net/stackalytics/+bug/1244485). 
 
 I see there's a need to have stats for official projects (more correctly
 to say projects belonging to official
 programs https://wiki.openstack.org/wiki/Programs), and separate that
 stats from other openstack-hosted projects. 
 So will it work to have the grouping like that?
  - Official OpenStack Projects
 - core 
 - integrated 
 - incubated
 - OpenStack complementary projects (e.g. infra projects like jeepyb, gitdm)
  - Stackforge projects (everything from github/stackforge)

I think the following objective groupings make sense:

Official
* Integrated (= commonly-released, server) projects (Nova, Swift... up
to Trove)
* Incubated (Marconi, Savanna...)
* All projects from all official programs (includes client bindings like
python-novaclient, openstack-infra/*, tempest, tripleO  etc.)

Stackforge
* All stackforge

Core at this point probably doesn't make sense, since depending on who
you ask, or what reference text you look at, you'd get a different list.
Just use integrated instead.

The OpenStack complementary projects grouping is also subjective and should 
be dropped in favor of the All projects from all official programs category.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
 On 11/01/2013 11:14 AM, Clayton Coleman wrote:
  - Original Message -
  Noorul Islam K M noo...@noorul.com writes:
 
  Adrian Otto adrian.o...@rackspace.com writes:
 
  Team,
 
  Our StackForge code repo is open, so you may begin submitting code for
  review. For those new to the process, I made a wiki page with links to
  the repo and information about how to contribute:
 
  https://wiki.openstack.org/wiki/Solum/Contributing
 
 
  1. .gitreview file is missing, so I submitted a patch
 
  https://review.openstack.org/#/c/54877
 
  
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
  
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
  
  solum/db/api.py
manager abstraction for db calls
  solum/db/sqlalchemy/api.py
sqlalchemy implementation
 
 I wouldn't just copy this layout, personally.
 
 We should look at getting some of the nova object work into
 oslo-incubator.  It provides a nice object model to abstract away the
 database API details.  You really don't want to be returning sqlalchemy
 models to the rest of the code base if you can get away with it.
 
 If we were starting the Nova database integration work from scractch
 today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
 like it would make more sense to add the db.api equivalents to our
 objects, and sub-class them to add specific database support.

Is what you're referring to different than what I see in master:

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

?  My assumption was that the db.api manager would be handling that 
translation, and we would define db.api as returning object models, vs 
sqlalchemy models (even if initially they looked similar).  Would the 
abstraction for each model be split into different classes then (so that there 
would be one implementation per model, per backend)?  What about cross model 
operations?

If I describe the model used in other projects as:

  manager class
translates retrieval requests into impl-specific objects
saves impl-specific objects
handles coarse multi object calls

  API
#fetch_somethings(filter)
#save_something

would you say that your model is:

  abstract model class
has methods that call out to an implementation (itself a subclass?) and 
returns subclasses of the abstract class

  Something
#fetch(filter)
#save

SqlAlchemySomething
  #fetch(filter)
  #save

?
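A sketch of that second shape, for concreteness (all names here are 
illustrative, not Solum or Nova code; an in-memory backend stands in for a 
SQLAlchemy implementation):

```python
import abc


class Something(abc.ABC):
    """Abstract model: callers program against this interface only."""

    @classmethod
    @abc.abstractmethod
    def fetch(cls, **filters):
        """Return all persisted objects matching the filters."""

    @abc.abstractmethod
    def save(self):
        """Persist this object via the backend implementation."""


class InMemorySomething(Something):
    """Toy backend; a SqlAlchemySomething would map DB rows instead."""

    _store = []

    def __init__(self, name):
        self.name = name

    @classmethod
    def fetch(cls, **filters):
        return [obj for obj in cls._store
                if all(getattr(obj, k) == v for k, v in filters.items())]

    def save(self):
        type(self)._store.append(self)
        return self


InMemorySomething("demo").save()
print(len(InMemorySomething.fetch(name="demo")))  # 1
```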

 
  I was also going to throw in migrate as a dependency and put in the glue
  code for that based on common use from ironic/trove/heat.  That'll pull in
  a few openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.
 

Thanks, I did not see it in the projects I was looking at. Which project is 
the canonical example here?



Re: [openstack-dev] [horizon] Building an application a top

2013-11-01 Thread Lyle, David
This is a topic near to my heart.  I think it's a logical move to have a 
separate dashboard for identity.  At the summit, we have a session planned on 
discussing the overall Information Architecture of Horizon 
http://icehousedesignsummit.sched.org/event/3b3b3430fe23da9ffed6a15eda50fd25

One part of that discussion will be to look at where Identity fits in.

But a second item needs to happen to make this feasible.  Horizon needs to 
adopt a more extensible layout so that we can accommodate more than 2 
dashboards without running into size constraints.  This is also a planned 
discussion at the design summit.

So no definitive plans yet, but stay tuned.

David

 -Original Message-
 From: Paul Belanger [mailto:paul.belan...@polybeacon.com]
 Sent: Friday, November 01, 2013 10:27 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [horizon] Building an application a top
 
 Greetings all,
 
 I am in the process of building out an application on top of horizon, but my
 application actually has nothing to do with OpenStack.
 
 You see, I am planning on using Keystone as my identity service for the
 applications and would also like to add a GUI for user management.
  I know openstack_dashboard provides this, however right now, the
 requirements require that I run keystone plus nova and glance.
 
 I quickly hacked openstack_dashboard to simply provide 'Identity' as the
 only admin panel for openstack_dashboard and it worked pretty well.
 However, the more I played around and read the documentation, it seemed
 the right approach was to build atop of horizon and not
 openstack_dashboard.
 
 So, for the moment, I have embedded openstack_dashboard admin
 (identity) and settings dashboards with my django project, but to be honest
 I don't really want to do this or take on the maintenance burden to maintain
 them.
 
 So, what I am wondering is how do you guys / gals see people who want to
 leverage keystone (or another openstack project) and build something atop
 horizon?  Would you be open to breaking out the identity dashboard into its
 own package that people could extend upon?
 Or is there another approach you recommend taking?
 
 --
 Paul Belanger | PolyBeacon, Inc.
 Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
 Github: https://github.com/pabelanger | Twitter:
 https://twitter.com/pabelanger
 



Re: [openstack-dev] [nova] [neutron] #openstack IRC meeting UTC 1300 Friday on PCI pass-through network support

2013-11-01 Thread Henry Gessau
On Fri, Nov 01, at 8:45 am, Robert Li (baoli) ba...@cisco.com wrote:

 Let's have a meeting on the #openstack IRC channel at UTC 1300 Friday.

Thanks for holding this meeting Robert, I think everyone found it useful.

Minutes and log:
http://eavesdrop.openstack.org/meetings/pci_passthrough_network/2013/pci_passthrough_network.2013-11-01-13.30.html
http://eavesdrop.openstack.org/meetings/pci_passthrough_network/2013/pci_passthrough_network.2013-11-01-13.30.txt
http://eavesdrop.openstack.org/meetings/pci_passthrough_network/2013/pci_passthrough_network.2013-11-01-13.30.log.html

 Based on our email discussions so far, I listed the topics below:
-- physical network binding

I spoke to Robert after the meeting and he clarified what he was
trying to say, which I had not understood at the time.

In fact, he had already suggested it in this thread, in an earlier message [1].

By reading the PCI domain/bus/slot/function fields, we could use some
combination of these in the PCI alias to distinguish different network
connections. (Some wildcarding support may need to be added.)

This might be sufficiently abstract not to be considered network
information in nova.

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/017729.html
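Henry's suggestion can be sketched roughly as follows. Note that the alias names and the wildcard syntax here are invented for illustration only; they are not Nova's actual PCI alias format:

```python
# Sketch: match a PCI device's domain:bus:slot.function address against
# an alias spec that allows '*' wildcards, so that devices behind a given
# bus can be grouped by their physical network connection.
import re

def pci_address_matches(spec, address):
    """spec like '0000:06:*.*', address like '0000:06:12.1'."""
    pattern = re.escape(spec).replace(r'\*', r'[0-9a-fA-F]+')
    return re.fullmatch(pattern, address) is not None

# Hypothetical aliases: all VFs on bus 06 are wired to fabric A, bus 07 to B.
aliases = {
    'net-fabric-a': '0000:06:*.*',
    'net-fabric-b': '0000:07:*.*',
}

def aliases_for(address):
    return sorted(name for name, spec in aliases.items()
                  if pci_address_matches(spec, address))

print(aliases_for('0000:06:12.1'))   # ['net-fabric-a']
```

The point being that the alias spec alone, with wildcards, may carry enough information to distinguish network connections without nova holding explicit network data.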

-- 
Henry



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Jay Pipes

On 11/01/2013 12:33 PM, Clayton Coleman wrote:

- Original Message -


Once all the gitreview stuff is cleaned up I was going to do some purely
mechanical additions.

I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:

solum/db/api.py
   manager abstraction for db calls
solum/db/sqlalchemy/api.py
   sqlalchemy implementation

I was also going to throw in migrate as a dependency and put in the glue code
for that based on common use from ironic/trove/heat.  That'll pull in a few
openstack common and config settings.  Finally, was going to add a
solum-dbsync command a la the aforementioned projects.  No schema will be
added.

Objections?



I was blindly assuming we want to pull in eventlet support, with the implicit 
understanding that we will be doing some form of timeslicing and async io bound 
waiting in the API... but would like to hear others weigh in before I add the 
monkey_patch and stub code around script startup.


I'm not so sure that bringing in eventlet should be done by default. It 
adds complexity and if most/all of the API calls will be doing some call 
to a native C library like libmysql that blocks, I'm not sure there is 
going to be much benefit to using eventlet versus multiplexing the 
servers using full OS processes -- either manually like some of the 
projects do with the workers=N configuration and forking, or using more 
traditional multiplexing solutions like running many mod_wsgi or uwsgi 
workers inside Apache or nginx.


Best,
-jay
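As a rough illustration of the "workers=N" alternative Jay describes (a sketch under my own assumptions; real services fork around a shared listening socket rather than a task list):

```python
from concurrent.futures import ProcessPoolExecutor

def handle(request):
    # Stand-in for a blocking call into a C library such as libmysql.
    # Because each worker is a full OS process, a blocked worker does
    # not stall the others -- no eventlet monkey-patching needed.
    return request * 2

def serve(requests, workers=4):
    # Multiplex across N OS processes, the "workers=N" pattern.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, requests))

if __name__ == '__main__':
    print(serve(range(5)))
```

The trade-off is memory footprint per worker versus the complexity of cooperative green threads.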




Re: [openstack-dev] [horizon] Building an application atop

2013-11-01 Thread Paul Belanger

On 13-11-01 12:53 PM, Lyle, David wrote:

This is a topic near to my heart.  I think it's a logical move to have a 
separate dashboard for identity.  At the summit, we have a session planned on 
discussing the overall Information Architecture of Horizon 
http://icehousedesignsummit.sched.org/event/3b3b3430fe23da9ffed6a15eda50fd25

One part of that discussion will be to look at where Identity fits in.

But a second item needs to happen to make this feasible.  Horizon needs to 
adopt a more extensible layout so that we can accommodate more than 2 
dashboards without running into size constraints.  This is also a planned 
discussion at the design summit.

So no definitive plans yet, but stay tuned.

Okay, so this is somewhat good news; there is existing discussion 
happening around this.


That's basically how I plan to move forward. For the moment I'll simply 
extract the identity dashboard from openstack_dashboard and embed it 
into my app.  Hopefully replacing it when upstream decides to break out 
identity into its own dashboard.


--
Paul Belanger | PolyBeacon, Inc.
Jabber: paul.belan...@polybeacon.com | IRC: pabelanger (Freenode)
Github: https://github.com/pabelanger | Twitter: 
https://twitter.com/pabelanger




Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Joshua Harlow
I think there is a summit topic about what to do about a good 'oslo.db'
(not sure if it got scheduled?)

I'd always recommend reconsidering just copying what nova/cinder and a few
others have for their db structure.

I don't think that has turned out so well in the long term (a 6000+ line
file is not so good).

As for a structure that might be better, in taskflow I followed more of
how ceilometer does their db api. It might work for you.

- https://github.com/openstack/ceilometer/tree/master/ceilometer/storage
- https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends

I also have examples of alembic usage in taskflow, since I also didn't
want to use sqlalchemy-migrate for the same reasons Russell mentioned.

- https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/backends/sqlalchemy

Feel free to bug me about questions.

On 11/1/13 9:46 AM, Clayton Coleman ccole...@redhat.com wrote:



- Original Message -
 On 11/01/2013 11:14 AM, Clayton Coleman wrote:
  - Original Message -
  Noorul Islam K M noo...@noorul.com writes:
 
  Adrian Otto adrian.o...@rackspace.com writes:
 
  Team,
 
  Our StackForge code repo is open, so you may begin submitting code for
  review. For those new to the process, I made a wiki page with links to
  the repo and information about how to contribute:
 
  https://wiki.openstack.org/wiki/Solum/Contributing
 
 
  1. .gitreview file is missing, so I submitted a patch
 
  https://review.openstack.org/#/c/54877
 
  
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
  
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
  
  solum/db/api.py
manager abstraction for db calls
  solum/db/sqlalchemy/api.py
sqlalchemy implementation
 
 I wouldn't just copy this layout, personally.
 
 We should look at getting some of the nova object work into
 oslo-incubator.  It provides a nice object model to abstract away the
 database API details.  You really don't want to be returning sqlalchemy
 models to the rest of the code base if you can get away with it.
 
 If we were starting the Nova database integration work from scratch
 today, I'm not sure we'd have db.api and db.sqlalchemy.api.  It seems
 like it would make more sense to add the db.api equivalents to our
 objects, and sub-class them to add specific database support.

Is what you're referring to different than what I see in master:

  
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
  
 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

?  My assumption was that the db.api manager would be handling that
translation, and we would define db.api as returning object models, vs
sqlalchemy models (even if initially they looked similar).  Would the
abstraction for each model be split into different classes then (so that
there would be one implementation per model, per backend)?  What about
cross model operations?

If I describe the model used in other projects as:

  manager class
translates retrieval requests into impl-specific objects
saves impl-specific objects
handles coarse multi object calls

  API
#fetch_somethings(filter)
#save_something

would you say that your model is:

  abstract model class
has methods that call out to an implementation (itself a subclass?)
and returns subclasses of the abstract class

  Something
#fetch(filter)
#save

SqlAlchemySomething
  #fetch(filter)
  #save

?
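For comparison, the object-model layout being described might look roughly like this sketch. The class names are invented and this is not actual nova/oslo code; the dict-backed subclass merely stands in for a sqlalchemy-backed one:

```python
import abc

class Something(abc.ABC):
    """Abstract model: callers deal only in these objects, never in
    backend-specific (e.g. sqlalchemy) models."""

    def __init__(self, uuid, name):
        self.uuid = uuid
        self.name = name

    @classmethod
    @abc.abstractmethod
    def fetch(cls, uuid):
        """Load one object by uuid."""

    @abc.abstractmethod
    def save(self):
        """Persist this object."""

class DictBackendSomething(Something):
    """Toy in-memory backend standing in for SqlAlchemySomething."""
    _store = {}

    @classmethod
    def fetch(cls, uuid):
        row = cls._store[uuid]
        return cls(uuid, row['name'])

    def save(self):
        self._store[self.uuid] = {'name': self.name}
```

The rest of the code base only ever sees `Something` instances, so the backend can be swapped without touching callers.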

 
  I was also going to throw in migrate as a dependency and put in the glue
  code for that based on common use from ironic/trove/heat.  That'll pull in
  a few openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.
 

Thanks, did not see it in the projects I was looking at, who's the
canonical example here?



Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Jiang, Yunhong
As Shawn Hartsock pointed out in his reply, I made a stupid error in the 
calculation. It's in fact 55 accesses per second, not the big number I 
calculated. 
I thought I graduated from elementary school but it seems I'm wrong. Really sorry 
for the stupid error.

--jyh

 -Original Message-
 From: Russell Bryant [mailto:rbry...@redhat.com]
 Sent: Friday, November 01, 2013 9:18 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [nova][scheduler]The database access in the
 scheduler filters
 
 On 11/01/2013 09:09 AM, Andrew Laski wrote:
  On 11/01/13 at 10:16am, John Garbutt wrote:
  Its intentional. Cells is there to split up your nodes into more
  manageable chunks.
 
  I don't think you mean to say that there's intentionally a performance
  issue.  But yes there are performance issues with the filter scheduler.
  Because I work on a deployment that uses cells to partition the workload
  I haven't seen them myself, but there are plenty of reports from others
  who have encountered them.  And it's easy to run some back of the
 napkin
  calculations like was done below and see that scheduling will require a
  lot of resources if there's no partitioning.
 
 
  There are quite a few design summit sessions on looking into
  alternative approaches to our current scheduler.
 
  While I would love a single scheduler to make everyone happy, I am
  thinking we might end up with several schedulers, each with slightly
  different properties, and you pick one depending on what you want to
  do with your cloud.
 
  +1.  We have the ability to drop in different schedulers right now, but
  there's only one really useful scheduler in the tree.  There has been
  talk of making a more performant scheduler which schedules in a 'good
  enough' fashion through some approximation algorithm.  I would love
 to
  see that get introduced as another scheduler and not as a rework of the
  filter scheduler.  I suppose the chance scheduler could technically
  count for that, but I'm under the impression that it isn't used beyond
  testing.
 
 Agreed.
 
 There's a lot of discussion happening in two different directions, it
 seems.  One group is very interested in improving the scheduler's
 ability to make the best decision possible using various policies.
 Another group is concerned with massive scale and is willing to accept
 good enough scheduling to get there.
 
 I think the filter scheduler is pretty reasonable for the best possible
 decision approach today.  There's some stuff that could perform better.
  There's more policy knobs that could be added.  There's the cross
 service issue to figure out ... but it's not bad.
 
 I'm very interested in a new good enough scheduler.  I liked the idea
 of running a bunch of schedulers that each only look at a subset of your
 infrastructure and pick something that's good enough.  I'm interested to
 hear other ideas in the session we have on this topic (rethinking
 scheduler design).
 
 Of course, you get a lot of the massive scale benefits by going to
 cells, too.  If cells is our answer here, I really want to see more
 people stepping up to help with the cells code.  There are still some
 feature gaps to fill.  We should also be looking at the road to getting
 back to only having one way to deploy nova (cells).  Having both cells
 vs non-cells options really isn't ideal long term.
 
 --
 Russell Bryant
 


Re: [openstack-dev] [nova] changing old migrations is verboten

2013-11-01 Thread Sean Dague
On 11/01/2013 12:20 PM, Johannes Erdfelt wrote:
snip
 I've watched us mess this up multiple times in the past when we were
 *sure* it was good. And as has been noticed recently, one of the
 collapses changes a fk name (by accident), which broke upgrades to
 havana for a whole class of people.

 So I think that we really should put a moratorium on touching past
 migrations until there is some sort of automatic validation that the
 new and old path are the same, with sufficiently complicated data
 that pushes the limits of those fields.

 Manual inspection by one person that their environment looks fine
 has never been a sufficient threshold for merging code.
 
 I can get completely on board with that.
 
 Does that mean you're softening your stance that migrations should never
 be touched?

If we have a way to automatically validate the new final results vs. the
old final results, including carrying interesting edge condition data
through it, yes, absolutely.

Many things move from verboten to acceptable with sufficient test
harness.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Jiang, Yunhong
Yes, you are right .. :(

 -Original Message-
 From: Shawn Hartsock [mailto:hartso...@vmware.com]
 Sent: Friday, November 01, 2013 8:20 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][scheduler]The database access in the
 scheduler filters
 
 
 
 - Original Message -
  From: Yunhong Jiang yunhong.ji...@intel.com
  To: openstack-dev@lists.openstack.org
  Sent: Thursday, October 31, 2013 6:39:29 PM
  Subject: [openstack-dev] [nova][scheduler]The database access in the
   scheduler filters
 
  I noticed several filters (AggregateMultiTenancyIsolation, ram_filter,
  type_filter, AggregateInstanceExtraSpecsFilter) have DB access in
  host_passes(). Some even access the DB on each invocation.
 
  Just curious if this is considered a performance issue? With a 10k-node,
  60-VM-per-node, 3-hour-VM-lifecycle cloud, it will have more than 1
  million DB accesses per second. Not a small number IMHO.
 
  Thanks
  --jyh
 
 
 
 Sorry if I'm dumb, but please try to explain things to me. I don't think I
 follow...
 
 10k nodes, 60 VM per node... is 600k VM in the whole cloud. A 3 hour life
  cycle for a VM means every hour 1/3 of the VMs turn over, so 200k VM
 are created/deleted per hour ... divide by 60 for ... 3,333.333 per minute
 or ... divide by 60 for ... 55.5 VM creations/deletions per second ...
 
 ... did I do that math right? So where's the million DB accesses per second
 come from? Are the rules fired for every VM on every access so that 600k
 VM + 1 new VM means the rules fire 600k + 1 times? What? Sorry... really
 confused.
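Writing the back-of-the-napkin numbers out, with the per-host filter step added (assuming, as in this thread, that every scheduling request runs the filter against all 10k hosts):

```python
nodes = 10000
vms_per_node = 60
lifecycle_hours = 3

total_vms = nodes * vms_per_node                       # 600,000 VMs
vms_per_second = total_vms / (lifecycle_hours * 3600)  # creates/deletes per second
filter_calls_per_second = vms_per_second * nodes       # filter runs once per host

print(round(vms_per_second, 1))        # 55.6
print(round(filter_calls_per_second))  # 555556
```

So ~55 scheduling requests per second, but ~560k filter invocations per second if each filter touches the DB.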
 
 # Shawn Hartsock
 


Re: [openstack-dev] [Solum] Stackforge Repo Ready

2013-11-01 Thread Noorul Islam Kamal Malmiyoda
On Fri, Nov 1, 2013 at 10:52 AM, Noorul Islam K M noo...@noorul.com wrote:
 Adrian Otto adrian.o...@rackspace.com writes:

 Team,

 Our StackForge code repo is open, so you may begin submitting code for 
 review. For those new to the process, I made a wiki page with links to the 
 repo and information about how to contribute:

 https://wiki.openstack.org/wiki/Solum/Contributing


 1. .gitreview file is missing, so I submitted a patch

 https://review.openstack.org/#/c/54877

 This patch also contains update to README to include relevant project
 information.

 2. My review request got rejected by Jenkins. A re-base against [1] is
not helping.


Clark Boylan (clarkb) from infra fixed this manually.

Thank you!

- Noorul



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
 On 11/01/2013 12:33 PM, Clayton Coleman wrote:
  - Original Message -
 
  Once all the gitreview stuff is cleaned up I was going to do some purely
  mechanical additions.
 
  I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:
 
  solum/db/api.py
 manager abstraction for db calls
  solum/db/sqlalchemy/api.py
 sqlalchemy implementation
 
  I was also going to throw in migrate as a dependency and put in the glue
  code
  for that based on common use from ironic/trove/heat.  That'll pull in a
  few
  openstack common and config settings.  Finally, was going to add a
  solum-dbsync command a la the aforementioned projects.  No schema will be
  added.
 
  Objections?
 
 
  I was blindly assuming we want to pull in eventlet support, with the
  implicit understanding that we will be doing some form of timeslicing and
  async io bound waiting in the API... but would like to hear others weigh
  in before I add the monkey_patch and stub code around script startup.
 
 I'm not so sure that bringing in eventlet should be done by default. It
 adds complexity and if most/all of the API calls will be doing some call
 to a native C library like libmysql that blocks, I'm not sure there is
 going to be much benefit to using eventlet versus multiplexing the
 servers using full OS processes -- either manually like some of the
 projects do with the workers=N configuration and forking, or using more
 traditional multiplexing solutions like running many mod_wsgi or uwsgi
 workers inside Apache or nginx.
 

What about callouts to heat/keystone APIs?



Re: [openstack-dev] [stackalytics] Official Programs tag?

2013-11-01 Thread Sean Dague
On 11/01/2013 12:42 PM, Thierry Carrez wrote:
snip
 I think the following objective groupings make sense:
 
 Official
 * Integrated (= commonly-released, server) projects (Nova, Swift... up
 to Trove)
 * Incubated (Marconi, Savanna...)
 * All projects from all official programs (includes client bindings like
 python-novaclient, openstack-infra/*, tempest, tripleO  etc.)

Agreed. Though realistically I still want Official Programs + Integrated
to be separable from incubation. Incubation is by definition a different
class of things. Yes, it's something the TC let move closer to
integration, but it's not a declared official part of OpenStack (yet).

The other important thing is this mapping is temporal, which as far as I
can tell stackalytics doesn't support for tagging.

If we're going to reference stackalytics more often for the project this
needs to be done right, and not even have the appearance that the
groupings were created just to make certain organizations float to the
top. So let's get these right. I'd love to permanently retire gitdm, but
until stackalytics actually counts things the way the project is
organized, it's not possible.

-Sean

-- 
Sean Dague
http://dague.net





[openstack-dev] [Solum] Logging Blueprint Approval

2013-11-01 Thread Adrian Otto
Team,

We have a blueprint for logging architecture within Solum, which is an 
important feature to aid in debugging at runtime:

https://blueprints.launchpad.net/solum/+spec/logging

The team voted for a lightweight governance style, and because we are planning 
to skip our upcoming meeting due to the ODS event, I plan to approve this 
blueprint in the absence of any objections here on the ML, or in #solum today. 
Please let me know your thoughts, as we want to be acting on an approved 
blueprint before posting an implementation.

Thanks,

Adrian


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Stefano Maffulli
On 11/01/2013 05:33 AM, Jaromir Coufal wrote:
 I was wondering, since there are a lot of people who cannot attend Design
 Sessions, if we can help them to be present at least in some way.

We tried in the past to set up systems to enable remote participation in
a generic way (give a URL for each session and hope somebody joins it
remotely) but never had enough return to justify the effort put into
the production.

I'd be interested to learn about your experiments: are you thinking of
some specific set of people that you need to get involved remotely, or do
you just want to provide the remote URL for anybody who wants to join?

/stef

-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] [heat] [savanna] [trove] Place for software configuration

2013-11-01 Thread Jay Pipes

On 11/01/2013 10:29 AM, Alexander Kuznetsov wrote:

Jay, do you have a plan to add Savanna (type: Heat::Savanna) and Trove
(type: Heat::Trove) providers to the HOT DSL?


Hi Alexander,

No, but I'd be interested in working on them, particularly a Savanna 
provider for Heat. I can start on it once the instance group API 
extension is in Nova and wired into Heat.


Best,
-jay


On Thu, Oct 31, 2013 at 10:33 PM, Jay Pipes jaypi...@gmail.com
mailto:jaypi...@gmail.com wrote:

On 10/31/2013 01:51 PM, Alexander Kuznetsov wrote:

Hi Heat, Savanna and Trove teams,

All these projects have a common part related to software configuration
management.  To create an environment, a user should specify hardware
parameters for VMs: choose a flavor, decide whether or not to use Cinder,
configure networks for virtual machines, and choose a topology for the
whole deployment. The next step is linking the software parameters with
the hardware specification. From the end user's point of view, the
existence of three different places and three different ways (Heat HOT
DSL, Trove clustering API and Savanna Hadoop templates) to configure
software is not convenient, especially if the user wants to create an
environment that simultaneously involves components from Savanna, Heat
and Trove.

I can suggest two approaches to overcome this situation:

A common library in Oslo. This approach allows deep domain-specific
customization. However, the user will still have three places with the
same UI where configuration actions must be performed.

Heat or some other component for software configuration management. This
approach is the best for end users, though in the future there may be
some limitations on deep domain-specific customization for configuration
management.


I think this would be my preference.

In other words, describe and orchestrate a Hadoop or Database setup
using HOT templates and using Heat as the orchestration engine.

Best,
-jay

Heat, Savanna and Trove teams, can you comment on these ideas? Which
approach is best?

Alexander Kuznetsov.















[openstack-dev] [marconi] No team meeting on Monday

2013-11-01 Thread Kurt Griffiths
No meeting on Monday due to the summit.

Cheers,

---
@kgriffs
Kurt Griffiths



Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Jay Pipes

On 11/01/2013 01:39 PM, Clayton Coleman wrote:



- Original Message -

On 11/01/2013 12:33 PM, Clayton Coleman wrote:

- Original Message -


Once all the gitreview stuff is cleaned up I was going to do some purely
mechanical additions.

I heard a few +1 for sqlalchemy with the standard OpenStack abstraction:

solum/db/api.py
manager abstraction for db calls
solum/db/sqlalchemy/api.py
sqlalchemy implementation

I was also going to throw in migrate as a dependency and put in the glue
code
for that based on common use from ironic/trove/heat.  That'll pull in a
few
openstack common and config settings.  Finally, was going to add a
solum-dbsync command a la the aforementioned projects.  No schema will be
added.

Objections?



I was blindly assuming we want to pull in eventlet support, with the
implicit understanding that we will be doing some form of timeslicing and
async io bound waiting in the API... but would like to hear others weigh
in before I add the monkey_patch and stub code around script startup.


I'm not so sure that bringing in eventlet should be done by default. It
adds complexity and if most/all of the API calls will be doing some call
to a native C library like libmysql that blocks, I'm not sure there is
going to be much benefit to using eventlet versus multiplexing the
servers using full OS processes -- either manually like some of the
projects do with the workers=N configuration and forking, or using more
traditional multiplexing solutions like running many mod_wsgi or uwsgi
workers inside Apache or nginx.



What about callouts to heat/keystone APIs?


Sure, it's possible to do that with eventlet. It's also possible to do 
that with a queueing system. For example, the API server could send an 
RPC message to a queue and a separate process could work on executing 
the individual tasks associated with a particular application build-pack.


The design of Zuul [1], especially with the relatively recent addition 
of Gearman and the nodepool orchestrator [2] that the openstack-infra 
folks wrote would, IMO, be a worthy place to look for inspiration, since 
Solum essentially is handling a similar problem domain -- distributed 
processing of ordered tasks.


Best,
-jay

[1] https://github.com/openstack-infra/zuul
[2] https://github.com/openstack-infra/nodepool
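A toy, single-process version of the queue-based pattern Jay describes (my sketch, not Zuul's or Gearman's actual design): the API server enqueues a build request and returns immediately, while a separate worker drains the queue and runs the ordered steps for each build-pack.

```python
import queue
import threading

tasks_done = []

def worker(q):
    # Drain jobs until the None sentinel arrives; each job runs its
    # steps in order, independently of the API process.
    for job in iter(q.get, None):
        for step in ('fetch', 'build', 'deploy'):   # hypothetical steps
            tasks_done.append((job, step))
        q.task_done()

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
q.put('app-1')   # what the API server would do on a build request
q.put('app-2')
q.join()         # only for the demo; a real API would not block here
q.put(None)
t.join()
```

With a real queueing system the worker would live in a different process (or host), which is what keeps the API server free of long-running work.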



Re: [openstack-dev] [Solum] Logging Blueprint Approval

2013-11-01 Thread Clayton Coleman
- Original Message -
 Team,
 
 We have a blueprint for logging architecture within Solum, which is an
 important feature to aid in debugging at runtime:
 
 https://blueprints.launchpad.net/solum/+spec/logging
 
 The team voted for a lightweight governance style, and because we are
 planning to skip our upcoming meeting due to the ODS event, I plan to
 approve this blueprint in the absence of any objections here on the ML, or
 in #solum today. Please let me know your thoughts, as we want to be acting
 on an approved blueprint before posting an implementation.
 
 Thanks,
 

Comment I already replied to Jay in IRC about:

Be sure to consider the potential impact of threading/eventlet behavior on the 
logging infrastructure so that attributes are bound correctly to the 
appropriate caller.

Other than that the concept of tying user info to logging more broadly is +1 
from me.



[openstack-dev] [neutron] [qa] gate-tempest-devstack-vm-neutron-pg-isolated failing 45% of the time

2013-11-01 Thread Sean Dague
Over the last 7 days -
http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kZXZzdGFjay12bS1uZXV0cm9uLXBnLWlzb2xhdGVkIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzgzMzI4OTEyNzEzfQ==

There is a 45% failure rate of the
gate-tempest-devstack-vm-neutron-pg-isolated job in neutron. Salvatore and I
were looking at logs the other day, and there are clearly SQL errors
being thrown because some of the Neutron queries are mysql-specific
now. That may or may not be the issue, but it's at least suspicious.

This is only currently affecting the neutron project jobs, because we
removed this from tempest yesterday as it was preventing us from merging
any non related code.

But 45% is a really high race failure rate, it would be really good if
the neutron team could prioritize addressing this. I realize it's summit
week, so things are slow, but please ensure this comes up as part of the
overall discussion.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Chris Friesen

On 11/01/2013 11:42 AM, Jiang, Yunhong wrote:

Shawn, yes, there are 56 VM accesses every second, and for each VM
access the scheduler will invoke the filter for each host; that means
for each VM access the filter function will be invoked 10k times. So
56 * 10k = 560k -- yes, half of 1M, but still a big number.



I'm fairly new to openstack so I may have missed earlier discussions, 
but has anyone looked at building a scheduler filter that would use 
database queries over sets of hosts rather than looping over each 
host and doing the logic in python?  Seems like that would be a lot more 
efficient...


Chris
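Chris's idea, pushing the filter predicate into the database as one set-based query instead of a per-host Python loop, might look something like this sketch (table and column names invented for illustration; sqlite stands in for the real backend):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE hosts (name TEXT, free_ram_mb INTEGER, free_disk_gb INTEGER)')
conn.executemany('INSERT INTO hosts VALUES (?, ?, ?)', [
    ('node1', 8192, 100),
    ('node2', 1024, 500),
    ('node3', 16384, 20),
])

def hosts_passing(ram_mb, disk_gb):
    # One query replaces N host_passes() calls; the DB evaluates the
    # predicate over the whole host set at once.
    rows = conn.execute(
        'SELECT name FROM hosts WHERE free_ram_mb >= ? AND free_disk_gb >= ?',
        (ram_mb, disk_gb))
    return sorted(r[0] for r in rows)

print(hosts_passing(4096, 50))   # ['node1']
```

The trade-off is that predicates expressible in SQL are cheap, while arbitrary Python filter logic would still need the loop.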




Re: [openstack-dev] [neutron] [qa] gate-tempest-devstack-vm-neutron-pg-isolated failing 45% of the time

2013-11-01 Thread Salvatore Orlando
Hi Sean,

I looked further yesterday and nailed down an issue which caused a spike of
failures, due to a patch merged on Thursday early morning (GMT); a fix for it
was pushed to gerrit.
I then handed off to Armando Migliaccio, who has already pushed a few
patches to solve these issues (of which there is apparently more than one).

https://review.openstack.org/#/c/54752/
https://review.openstack.org/#/c/54850/

there are also other patches related to postgres from Armando, but I'll
leave to him to comment on whether they're related or not.

Salvatore


On 1 November 2013 18:07, Sean Dague s...@dague.net wrote:

 Over the last 7 days -

 http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kZXZzdGFjay12bS1uZXV0cm9uLXBnLWlzb2xhdGVkIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzgzMzI4OTEyNzEzfQ==

 There is a 45% failure rate of the
 gate-tempest-devstack-vm-neutron-pg-isolated in neutron. Salvatore and I
 were looking at logs the other day, and there are clearly SQL errors
 being thrown because some of the Neutron queries are mysql-specific
 now. That may or may not be the issue, but it's at least suspicious.

 This is only currently affecting the neutron project jobs, because we
 removed this from tempest yesterday as it was preventing us from merging
 any unrelated code.

 But 45% is a really high race failure rate, it would be really good if
 the neutron team could prioritize addressing this. I realize it's summit
 week, so things are slow, but please ensure this comes up as part of the
 overall discussion.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Joshua Harlow
I think there has been discussion of this, and I think there will be a good
design summit session on it.

http://icehousedesignsummit.sched.org/event/cde73dadfd67eaae5bf98b90ba7de073#.UnPwKiRQ3mw

I think what you have suggested could be a way to do it, as set intersections
and unions are all databases do in the end anyway ;)

And set unions and intersections are nearly synonymous with filtering ;)
Some of the filters, though, would likely fall into stored-procedure land.

On 11/1/13 11:10 AM, Chris Friesen chris.frie...@windriver.com wrote:

On 11/01/2013 11:42 AM, Jiang, Yunhong wrote:
 Shawn, yes, there are 56 VM accesses every second, and for each VM
 access, the scheduler will invoke the filter for each host; that means,
 for each VM access, the filter function will be invoked 10k times. So
 56 * 10k = 560k; yes, half of 1M, but still a big number.


I'm fairly new to openstack so I may have missed earlier discussions,
but has anyone looked at building a scheduler filter that would use
database queries over sets of hosts rather than looping over each
host and doing the logic in python?  Seems like that would be a lot more
efficient...

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Jiang, Yunhong
Aha, right after replying to Hartsock's mail, I realized I'm still correct. Glad 
that I did graduate from school :)

--jyh

 -Original Message-
 From: Jiang, Yunhong [mailto:yunhong.ji...@intel.com]
 Sent: Friday, November 01, 2013 10:32 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova][scheduler]The database access in the
 scheduler filters
 
 As Shawn Hartsock pointed out in his reply, I made a stupid error in the
 calculation. It's in fact 55 accesses per second, not the big number I
 calculated.
 I thought I graduated from elementary school, but it seems I'm wrong. Really
 sorry for the stupid error.
 
 --jyh
 
  -Original Message-
  From: Russell Bryant [mailto:rbry...@redhat.com]
  Sent: Friday, November 01, 2013 9:18 AM
  To: openstack-dev@lists.openstack.org
  Subject: Re: [openstack-dev] [nova][scheduler]The database access in
 the
  scheduler filters
 
  On 11/01/2013 09:09 AM, Andrew Laski wrote:
   On 11/01/13 at 10:16am, John Garbutt wrote:
   Its intentional. Cells is there to split up your nodes into more
   manageable chunks.
  
   I don't think you mean to say that there's intentionally a performance
   issue.  But yes there are performance issues with the filter scheduler.
   Because I work on a deployment that uses cells to partition the
 workload
   I haven't seen them myself, but there are plenty of reports from others
   who have encountered them.  And it's easy to run some back of the
  napkin
   calculations like was done below and see that scheduling will require a
   lot of resources if there's no partitioning.
  
  
   There are quite a few design summit sessions on looking into
   alternative approaches to our current scheduler.
  
   While I would love a single scheduler to make everyone happy, I am
   thinking we might end up with several schedulers, each with slightly
   different properties, and you pick one depending on what you want
 to
   do with your cloud.
  
   +1.  We have the ability to drop in different schedulers right now, but
   there's only one really useful scheduler in the tree.  There has been
   talk of making a more performant scheduler which schedules in a
 'good
   enough' fashion through some approximation algorithm.  I would love
  to
   see that get introduced as another scheduler and not as a rework of
 the
   filter scheduler.  I suppose the chance scheduler could technically
   count for that, but I'm under the impression that it isn't used beyond
   testing.
 
  Agreed.
 
  There's a lot of discussion happening in two different directions, it
  seems.  One group is very interested in improving the scheduler's
  ability to make the best decision possible using various policies.
  Another group is concerned with massive scale and is willing to accept
  good enough scheduling to get there.
 
  I think the filter scheduler is pretty reasonable for the best possible
  decision approach today.  There's some stuff that could perform better.
   There's more policy knobs that could be added.  There's the cross
  service issue to figure out ... but it's not bad.
 
  I'm very interested in a new good enough scheduler.  I liked the idea
  of running a bunch of schedulers that each only look at a subset of your
  infrastructure and pick something that's good enough.  I'm interested to
  hear other ideas in the session we have on this topic (rethinking
  scheduler design).
 
  Of course, you get a lot of the massive scale benefits by going to
  cells, too.  If cells is our answer here, I really want to see more
  people stepping up to help with the cells code.  There are still some
  feature gaps to fill.  We should also be looking at the road to getting
  back to only having one way to deploy nova (cells).  Having both cells
  vs non-cells options really isn't ideal long term.
 
  --
  Russell Bryant
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -
  I was blindly assuming we want to pull in eventlet support, with the
  implicit understanding that we will be doing some form of timeslicing and
  async io bound waiting in the API... but would like to hear others weigh
  in before I add the monkey_patch and stub code around script startup.
 
  I'm not so sure that bringing in eventlet should be done by default. It
  adds complexity and if most/all of the API calls will be doing some call
  to a native C library like libmysql that blocks, I'm not sure there is
  going to be much benefit to using eventlet versus multiplexing the
  servers using full OS processes -- either manually like some of the
  projects do with the workers=N configuration and forking, or using more
  traditional multiplexing solutions like running many mod_wsgi or uwsgi
  workers inside Apache or nginx.
 
 
  What about callouts to heat/keystone APIs?
 
 Sure, it's possible to do that with eventlet. It's also possible to do
 that with a queueing system. For example, the API server could send an
 RPC message to a queue and a separate process could work on executing
 the individual tasks associated with a particular application build-pack.

I guess I was asking because event patterns are generally more efficient for 
memory than multiprocess, assuming that the underlying codebase isn't fighting 
the event system at every step.  Are your concerns with eventlet based on that 
mismatch (bugs, problems with eventlet across the various projects and 
libraries that OpenStack uses) or more that you believe we should start, at the 
very beginning, with the pattern of building everything as a distributed 
ordered task flow since we know at least some of our interactions are 
asynchronous?  There are at least a few network IO operations that will be 
synchronous to our flows - while they are not likely to be a large percentage 
of the API time, they may definitely block threads for a period of time.
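That "distributed ordered task flow" idea can be sketched with nothing but the stdlib: the API thread enqueues an ordered list of tasks and returns immediately, while a worker executes them in order. (A real deployment would use a broker such as Gearman or RabbitMQ rather than an in-process queue; this is only a shape sketch.)

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    # Executes tasks in FIFO order; the API call does not wait for them.
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut down the worker
            break
        name, func = task
        results.append((name, func()))
        task_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# An API handler would enqueue the ordered steps of a build and return 202.
task_queue.put(("create_repo", lambda: "repo-created"))
task_queue.put(("trigger_build", lambda: "build-started"))
task_queue.put(None)
t.join()

print(results)  # [('create_repo', 'repo-created'), ('trigger_build', 'build-started')]
```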

 
 The design of Zuul [1], especially with the relatively recent addition
 of Gearman and the nodepool orchestrator [2] that the openstack-infra
 folks wrote would, IMO, be a worthy place to look for inspiration, since
 Solum essentially is handling a similar problem domain -- distributed
 processing of ordered tasks.
 
 Best,
 -jay

Familiar with Gearman, will look through nodepool/Zuul.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman

- Original Message -
 I think there is a summit topic about what to do about a good 'oslo.db'
 (not sure if it got scheduled?)

Will look.

 
 I'd always recommend reconsidering just copying what nova/cinder and a few
 others have for their db structure.
 
 I don't think that has turned out so well in the long term (a 6000+ line
 file is not so good).
 
 As for a structure that might be better, in taskflow I followed more of
 how ceilometer does their db api. It might work for you.
 
 - https://github.com/openstack/ceilometer/tree/master/ceilometer/storage
 -

The Connection / Model object paradigm in Ceilometer was what I was assuming 
was recommended, and was where I was starting mentally (it's similar but not 
identical to trove, ironic, and heat).  The ceilometer model is what I would 
describe as a resource manager class (Connection) that hides implementation (by 
mapping Sqlalchemy to the Model* objects).  So storage/base.py | 
storage/models.py define a rough domain model.  Russell, is that what you're 
advocating against (because of the size of the eventual resource manager class)?

Here's a couple of concrete storage interaction patterns

  simple application/component/sensor persistence with clean validation back to 
REST consumers
traditional crud, probably 3-8 resources over time will follow this pattern
best done via object model type interactions and then a direct persist 
operation

  elaborate a plan description for the application (yaml/json/etc) into the 
object model
will need to retrieve specific sets of info from the object model
typically one way
may potentially involve asynchronous operations spawned from the initial 
request to retrieve more information

  translate the plan/object model into a HEAT template
will need to retrieve specific sets of info from the object model
typically one way

  create/update a HEAT stack based on changes
likely will set the stack id into the object model
might return within milliseconds or seconds

  provision source code repositories
might return within milliseconds or minutes

  provision DNS
this can take from within milliseconds to seconds, and DNS is likely only 
visible to an API consumer after minutes.

  trigger build flows
this may take milliseconds to initiate, but minutes to complete

The more complex operations are likely separate pluggable service 
implementations (read: abstracted) that want to call back into the object model 
in a simple way, possibly via methods exposed specifically for those use cases.

I *suspect* that Solum will never have the complexity Nova does in persistence 
model, but that we'll end up with around 20 tables in the first 2 years.  I 
would expect API surface area to be slightly larger than some projects, but not 
equivalent to keystone/nova by any means.

 https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/bac
 kends
 
 I also have examples of alembic usage in taskflow, since I also didn't
 want to use sqlalchemy-migrate for the same reasons russell mentioned.
 
 -
 https://github.com/stackforge/taskflow/tree/master/taskflow/persistence/bac
 kends/sqlalchemy
 
 Feel free to bug me about questions.

Thanks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [qa] gate-tempest-devstack-vm-neutron-pg-isolated failing 45% of the time

2013-11-01 Thread Armando Migliaccio
I think/hope I nailed it down. One more patch coming right up!


On Fri, Nov 1, 2013 at 11:15 AM, Salvatore Orlando sorla...@nicira.comwrote:

 Hi Sean,

 I looked further yesterday and nailed down an issue which caused a spike of
 failures due to a patch merged on Thursday early morning (GMT); a fix for it
 was pushed to gerrit.
 I then handed off to Armando Migliaccio, who has already pushed a few
 patches to solve these issues (which are apparently more than one).

 https://review.openstack.org/#/c/54752/
 https://review.openstack.org/#/c/54850/

 there are also other patches related to postgres from Armando, but I'll
 leave to him to comment on whether they're related or not.

 Salvatore


 On 1 November 2013 18:07, Sean Dague s...@dague.net wrote:

 Over the last 7 days -

 http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkZpbmlzaGVkOiBTVUNDRVNTXCIgT1IgbWVzc2FnZTpcIkZpbmlzaGVkOiBGQUlMVVJFXCIpIEFORCBidWlsZF9uYW1lOmdhdGUtdGVtcGVzdC1kZXZzdGFjay12bS1uZXV0cm9uLXBnLWlzb2xhdGVkIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzgzMzI4OTEyNzEzfQ==

 There is a 45% failure rate of the
 gate-tempest-devstack-vm-neutron-pg-isolated in neutron. Salvatore and I
 were looking at logs the other day, and there are clearly SQL errors
 being thrown because some of the Neutron queries are mysql-specific
 now. That may or may not be the issue, but it's at least suspicious.

 This is only currently affecting the neutron project jobs, because we
 removed this from tempest yesterday as it was preventing us from merging
 any unrelated code.

 But 45% is a really high race failure rate, it would be really good if
 the neutron team could prioritize addressing this. I realize it's summit
 week, so things are slow, but please ensure this comes up as part of the
 overall discussion.

 -Sean

 --
 Sean Dague
 http://dague.net


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

2013-11-01 Thread Christopher Armstrong
Vijendar and I are trying to figure out if we need to set the resource_id
of a resource to None when it's being deleted.

This is done in a few resources, but not everywhere. To me it seems either

a) redundant, since the resource is going to be deleted anyway (thus
deleting the row in the DB that has the resource_id column)
b) actively harmful to useful debuggability, since if the resource is
soft-deleted, you'll not be able to find out what physical resource it
represented before it's cleaned up.

Is there some specific reason we should be calling resource_id_set(None) in
a check_delete_complete method?

-- 
IRC: radix
Christopher Armstrong
Rackspace
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to newer full projects from what used to be part of nova

2013-11-01 Thread Devananda van der Veen
On Thu, Oct 31, 2013 at 8:02 PM, Dean Troyer dtro...@gmail.com wrote:


 Also, FWIW, I don't see another one of these situations coming anytime
 soon.  All of the new project activity is around new services/features.


Actually, anyone deploying nova with the baremetal driver will face a
similar split when Ironic is included in the release. I'm targeting
Icehouse, but of course, it's up to the TC when Ironic graduates.

This should have a smaller impact than either the neutron or cinder splits,
both of which were in widespread use, but I expect we'll see more usage of
nova-baremetal crop up now that Havana is released.


-Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Liz Blanchard

On Nov 1, 2013, at 1:42 PM, Stefano Maffulli stef...@openstack.org wrote:

 On 11/01/2013 05:33 AM, Jaromir Coufal wrote:
 I was wondering, since there is a lot of people who cannot attend Design
 Sessions, if we can help them to be present at least in some way.
 
 We tried in the past to set up systems to enable remote participation in
 a generic way (give a URL for each session and hope somebody joins
 remotely) but never had enough return to justify the effort put into
 production.
 
 I'd be interested to learn about your experiments: are you thinking of
 some specific set of people that you need to get involved remotely or
 you just want to provide the remote URL for anybody that wants to join?
 
I won't be able to attend the HK summit, but I'm going to be trying to follow 
along with the etherpads for the Horizon sessions. Jarda and I were chatting 
about how I could be more involved via Google Hangout for some of the UX design 
discussions we are looking to have. It could be just me that wants to attend 
remotely, though :) Maybe more people would be interested in this now that 
the summits have grown larger and have gone global?

Thanks for any thoughts on what may or may not work in this space,
Liz

 /stef
 
 -- 
 Ask and answer questions on https://ask.openstack.org
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Shawn Hartsock
Thanks: TIL.

The filter invocation per host is the bit I was forgetting. 

I'm assuming that the facts about the hosts don't change several times a second, 
so if you held the facts in RAM and then asserted the rules against those facts, 
allowing for age-out/invalidation based on incoming updates, the whole 
system would run faster. I remember a thread on using dogpile/memoization for 
this kind of thing.
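Hand-rolled with the stdlib for illustration (dogpile.cache provides this pattern far more robustly), the age-out/invalidation idea might look like:

```python
import time

class FactCache:
    """Caches host facts in RAM with a TTL; stale entries are refetched."""

    def __init__(self, fetch, ttl=10.0, clock=time.monotonic):
        self._fetch = fetch      # callable that actually hits the database
        self._ttl = ttl
        self._clock = clock
        self._store = {}         # host -> (timestamp, facts)

    def get(self, host):
        now = self._clock()
        entry = self._store.get(host)
        if entry is None or now - entry[0] > self._ttl:
            entry = (now, self._fetch(host))
            self._store[host] = entry
        return entry[1]

    def invalidate(self, host):
        """Called when an incoming update for this host arrives."""
        self._store.pop(host, None)

# Demo with a fake fetch function that records each DB hit.
db_hits = []
cache = FactCache(fetch=lambda h: db_hits.append(h) or {"free_ram_mb": 4096}, ttl=60)
cache.get("node1")
cache.get("node1")           # served from RAM, no second DB hit
print(len(db_hits))          # 1
```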

# Shawn Hartsock

- Original Message -
 From: Yunhong Jiang yunhong.ji...@intel.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, November 1, 2013 1:42:03 PM
 Subject: Re: [openstack-dev] [nova][scheduler]The database access in  
 the scheduler filters
 
 Shawn, yes, there are 56 VM accesses every second, and for each VM access, the
 scheduler will invoke the filter for each host; that means, for each VM access,
 the filter function will be invoked 10k times.
 So 56 * 10k = 560k; yes, half of 1M, but still a big number.
 
 --jyh
 
  -Original Message-
  From: Shawn Hartsock [mailto:hartso...@vmware.com]
  Sent: Friday, November 01, 2013 8:20 AM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [nova][scheduler]The database access in the
  scheduler filters
  
  
  
  - Original Message -
   From: Yunhong Jiang yunhong.ji...@intel.com
   To: openstack-dev@lists.openstack.org
   Sent: Thursday, October 31, 2013 6:39:29 PM
   Subject: [openstack-dev] [nova][scheduler]The database access in the
  scheduler filters
  
   I noticed several filters (AggregateMultiTenancyIsolation, ram_filter,
   type_filter, AggregateInstanceExtraSpecsFilter) have DB access in the
   host_passes(). Some will even access the DB on each invocation.
  
   Just curious if this is considered a performance issue? With a cloud of 10k
   nodes, 60 VMs per node, and a 3-hour VM life cycle, it will have more than 1
   million DB accesses per second. Not a small number IMHO.
  
   Thanks
   --jyh
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
  
  Sorry if I'm dumb, but please try to explain things to me. I don't think I
  follow...
  
  10k nodes, 60 VM per node... is 600k VM in the whole cloud. A 3 hour life
  cycle for a VM means every hour 1/3 of the VMs turn over, so 200k VM
  are created/deleted per hour ... divide by 60 for ... 3,333.333 per minute
  or ... divide by 60 for ... 55.5 VM creations/deletions per second ...
  
  ... did I do that math right? So where's the million DB accesses per second
  come from? Are the rules fired for every VM on every access so that 600k
  VM + 1 new VM means the rules fire 600k + 1 times? What? Sorry... really
  confused.
  
  # Shawn Hartsock
  
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler]The database access in the scheduler filters

2013-11-01 Thread Shawn Hartsock

Something should probably change.

The fundamental design issue is that we've got a 1:1 relationship between rule 
execution and database fetch. The rules may fire at rates several orders of 
magnitude different from the rate of data refreshes in the database. So I'd think you 
would want to decouple the database fetch from the rule assertion.

# Shawn Hartsock

- Original Message -
 From: Chris Friesen chris.frie...@windriver.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, November 1, 2013 2:10:52 PM
 Subject: Re: [openstack-dev] [nova][scheduler]The database access in  the 
 scheduler filters
 
 On 11/01/2013 11:42 AM, Jiang, Yunhong wrote:
  Shawn, yes, there are 56 VM accesses every second, and for each VM
  access, the scheduler will invoke the filter for each host; that means,
  for each VM access, the filter function will be invoked 10k times. So
  56 * 10k = 560k; yes, half of 1M, but still a big number.
 
 
 I'm fairly new to openstack so I may have missed earlier discussions,
 but has anyone looked at building a scheduler filter that would use
 database queries over sets of hosts rather than looping over each
 host and doing the logic in python?  Seems like that would be a lot more
 efficient...
 
 Chris
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Dan Smith
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420
  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43

This API and these models are what we are trying to avoid exposing to
the rest of nova. By wrapping these in our NovaObject-based structures,
we can bundle versioned data and methods together which is what we need
for cross-version compatibility and parity for the parts of nova that
are not allowed to talk to the database directly.

See the code in nova/objects/* for the implementations. Right now, these
just call into the db_api.py, but eventually we want to move the actual
database implementation into the objects themselves and hopefully
dispense with most or all of the sqlalchemy/* stuff. This also provides
us the ability to use other persistence backends that aren't supported
by sqlalchemy, or that don't behave like it does.
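A toy illustration of the shape of that idea (this is not the actual NovaObject API, just a sketch under that assumption: the data, its schema version, and the persistence call travel together on the object, so callers never touch the DB layer directly):

```python
class FakeDB:
    """Stands in for the real persistence backend."""
    rows = {}

class Instance:
    """Toy versioned object: data, version and persistence travel together."""
    VERSION = "1.1"                     # bumped whenever the fields change
    fields = ("uuid", "host", "vm_state")

    def __init__(self, **kwargs):
        for f in self.fields:
            setattr(self, f, kwargs.get(f))

    def to_primitive(self):
        # What would cross RPC: the payload plus the schema version it conforms to,
        # so services on different code versions can negotiate compatibility.
        return {"version": self.VERSION,
                "data": {f: getattr(self, f) for f in self.fields}}

    def save(self):
        # Callers never touch the DB layer directly; the object does.
        FakeDB.rows[self.uuid] = self.to_primitive()

inst = Instance(uuid="abc", host="node1", vm_state="building")
inst.save()
print(FakeDB.rows["abc"]["version"])  # 1.1
```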

If you're going to be at the summit, come to the objects session on
Thursday where we'll talk about this in more detail. Other projects have
expressed interest in moving the core framework into Oslo so that we're
all doing things in roughly the same way. It would be good to get you
started on the right way early on before you have the migration hassle
we're currently enjoying in Nova :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-Dev][Compass] Announcement of the Compass Deployment project

2013-11-01 Thread Rochelle.Grober
The demo session is:

Wednesday, November 6 * 1:20pm - 1:35pm  in the Demo Theatre

The presentation is:

Thursday, November 7 * 4:30pm - 5:10pm in Sky City Meeting Rm 4 (Marriot)

We are also trying for an unconference session to do some brainstorming with 
interested developers.  And, our schedules should be on the website so you can 
find the team members at the conference. 

Folks to look for:
Shuo Yang
Weidong Shao
Haiying Wang

Thanks,
Rocky Grober

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Friday, November 01, 2013 12:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Fwd: [Openstack-Dev] Announcement of the Compass 
Deployment project

On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:

 A message from my associate as he wings to the Icehouse OpenStack summit
 (and yes, we're psyched):
 Our project, code-named Compass, is a RESTful API-driven deployment platform
 that performs discovery of the physical machines attached to a specified set
 of switches. It then customizes configurations for the machines you identify and
 installs the systems and networks to your configuration specs. Besides
 presenting the technical internals and design decisions of Compass at the
 Icehouse summit, we will also have a demo session.

Cool - when is it? I'd like to get along.

...
 We look forward to showing the community our project, receiving and
 incorporating feedback, brainstorming what else it could do, and integrating it
 into the OpenStack family.  We are a part of the OpenStack community and want to
 support it both with core participation and with Compass.

I'm /particularly/ interested in the interaction with Neutron and
network modelling - do you use Neutron for the physical switch
interrogation, do you inform Neutron about the topology and so on.

Anyhow, lets make sure we can connect and see where we can collaborate!

Cheers,
Rob



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Jaromir Coufal


On 2013/01/11 18:42, Stefano Maffulli wrote:

On 11/01/2013 05:33 AM, Jaromir Coufal wrote:

I was wondering, since there is a lot of people who cannot attend Design
Sessions, if we can help them to be present at least in some way.

We tried in the past to set up systems to enable remote participation in
a generic way (give a URL for each session and hope somebody joins
remotely) but never had enough return to justify the effort put into
production.

I'd be interested to learn about your experiments: are you thinking of
some specific set of people that you need to get involved remotely or
you just want to provide the remote URL for anybody that wants to join?

/stef


I'll try to be more specific.

I started to think about this because there are definitely people who 
are interested in sessions but cannot attend for various reasons. 
It is also worthwhile to document discussions. I know that we keep etherpads, 
but they might miss some details or good 
thoughts which were mentioned during conversations.


Since I have quite good experience with using these hangouts on air, I 
wanted to propose it as a general way to make our sessions more 
accessible. Lots of people are asking how they can get more involved, and 
I received very positive feedback for streaming; so I believe that the 
audience is definitely there.


I think what we can do (and it won't take a lot of extra effort) is:
* Set up a Google+ account (if not already existing)
* Before session, log in and start hangout on air: 
https://www.google.com/+/learnmore/hangouts/onair.html

* Copy/paste the YouTube stream link to the official etherpad
* Point the webcam at the right spot
* Right before the session, press 'Start broadcast'
* People who are interested in the session just follow the link (stream)
* Stop broadcasting at the end of the session.

Concerns:
- 10-person limit for the hangout (but no limit on YouTube streaming)
- Stream is delayed about 30 seconds from reality (more difficult 
interaction for Q&A)
- I am not sure about acoustic conditions, i.e. whether discussions will be loud 
enough (hope it will be fine).


I would like to try this at least in the sessions which I am leading (and 
use my YouTube channel), but I believe that this is very beneficial 
for all projects and sessions. It might be the moderator's responsibility 
to take care of this, or it could be set up globally for each hall.


The interaction with remote people might be a bit difficult (delay, 
volume, etc.). So the question is whether, this time around, we try to 
set up a way for people to interact remotely, or just do streaming 
and prepare ourselves better for the following summit if we get positive 
feedback.


Possible ways of remote interaction:
- direct hangout participation (key people)
- questions/comments section in the etherpad (delay, more difficult to 
follow; might need an assistant for tracking them)


What do you think?
-- Jarda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Migrating to newer full projects from what used to be part of nova

2013-11-01 Thread Dean Troyer
On Fri, Nov 1, 2013 at 1:38 PM, Devananda van der Veen 
devananda@gmail.com wrote:

 Actually, anyone deploying nova with the baremetal driver will face a
 similar split when Ironic is included in the release. I'm targeting
 Icehouse, but of course, it's up to the TC when Ironic graduates.

 This should have a smaller impact than either the neutron or cinder
 splits, both of which were in widespread use, but I expect we'll see more
 usage of nova-baremetal crop up now that Havana is released.


I don't recall in which release baremetal first became a supported option; is
it only now in Havana?  Is it clear in the docs that this sort of situation
is coming in the next release or two? (And no, I haven't gone to look for
myself, maybe on the plane tomorrow...)

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Design Workshop at SFO

2013-11-01 Thread Roshan Agrawal
Hello, we are locked down on the plan to hold design workshops on Solum at SFO! 
It is now time to confirm your participation and make travel arrangements. 

Please confirm your attendance by visiting the eventbrite page: 
https://www.eventbrite.com/event/9130831563 . This is important so we get an 
accurate count of attendees.

Workshop dates: Nov 19, 20
Location: Rackspace SFO office (620 Folsom St, San Francisco, CA 94107)
Purpose: working sessions for Solum contributors to discuss design/blueprints.

Meeting Structure
Nov 19 Tuesday 9:00 am - 5 pm
  9:00 - 9:30: check-in
  9:30 - 10:00: introductions, agenda
  10:00 - 5:00: Roundtable workshop, whiteboarding 
  5:30 - 7:00: Happy hour (3rd floor at the Rackspace SFO office)

Nov 20 Wednesday 9:30 am - 3:00 pm : Continue workshops
Workshop Concludes 3 pm Wednesday

Please refer to the etherpad page below for the latest info on the event, and 
to provide input on discussion topics for the workshop.
https://etherpad.openstack.org/p/SolumSFOCommunityWorkshop

Thanks, and we look forward to seeing you all at the event.

Roshan Agrawal

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Stefano Maffulli
On Fri 01 Nov 2013 12:40:44 PM PDT, Jaromir Coufal wrote:
 Since I have quite good experience with using these hangouts on air, I
 wanted to propose it as a general way how to get our sessions more
 accessible. Lot of people are asking how they can get more involved
 and I received very positive feedback for streaming; so I believe that
 the audience is definitely there.

Yeah, we had those requests in the past too and we've tried to 
accommodate them in different ways. The feedback at the end was always 
poor, and we traced the root causes not to technology but to social 
behaviour and expectations. The remote audience expected smooth, 
almost-real-life presence at the event, and that's just not going to 
happen because: things during the design summit get frantic, schedules 
are hard to keep, discipline (always talk into a microphone, always 
identify yourself, don't talk over somebody, keep background noise low, 
etc.) is almost impossible, technical glitches and small network 
disruptions are to be expected, and timezones are a fact of life. And on 
the other side of the event, I have never heard of anybody taking a 
break from work/life to follow the remote event: people barely pay 
attention while 'multitasking', except the one or two that really need 
to be involved.

This led us to give up trying for Hong Kong. We searched for 
alternatives (see the infra meeting logs from a few months back and my 
personal experiments [1] based on UDS experience), and the only solid 
one was to use a land line to call in the crucial people that *have to* 
be in a session. In the end Thierry and I made a judgement call to drop 
that too, because we didn't hear anyone demanding it and setting it up 
reliably for all sessions required a lot of effort (which in the past we 
felt went wasted anyway).

We decided to let moderators find their preferred way to pull in those 
1-3 people ad-hoc with their preferred methods.


 I think what we can do (and it won't take a lot of extra effort) is:
[...]

I'm happy to help you do that. If it works, good for you and for all 
the people that won't be in HK. The room should have enough bandwidth 
to handle that and there should be a way to connect a laptop to the 
mixer so at least people online will be able to get the audio straight 
from a good source. If you have audio cables, take them with you.

 I would like to try this at least in sessions which I am leading (and
 use my youtube's channel), but I believe that this is very beneficial
 for all projects and sessions. It might be on moderators
 responsibility to take care about that, or it can be globally setup
 for each hall.

Indeed, each moderator can let us know for the summit after HK what 
they would like to have.

/stef


[1] 
http://maffulli.net/2013/03/17/simple-live-audio-streaming-from-openstack-summit-using-raspberrypi/

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Michael Still
On Sat, Nov 2, 2013 at 3:30 AM, Russell Bryant rbry...@redhat.com wrote:

 I also would not use migrate.  sqlalchemy-migrate is a dead upstream and
 we (OpenStack) have had to inherit it.  For new projects, you should use
 alembic.  That's actively developed and maintained.  Other OpenStack
 projects are either already using it, or making plans to move to it.

This is something I wanted to dig into at the summit, in fact, mostly
because I'm not sure I agree... Sure, migrate is now an OpenStack
project, but so is oslo and we're happy to use that. So I don't think
its being abandoned by the original author is a strong argument.

It's not clear to me what alembic gives us that we actually want...
Sure, we could have a non-linear stream of migrations, but we already
do a terrible job of maintaining a simple linear stream. To be honest, I
don't think adding complexity is going to make the world any better.

These are the kinds of issues I wanted to discuss in the nova db summit
session, if people are able to come to that.

These are the kind of issues I wanted to discuss in the nova db summit
session if people are able to come to that.

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-Dev] [Compass] Announcement of the Compass Deployment project

2013-11-01 Thread Rochelle.Grober
From: Dmitry Mescheryakov [mailto:dmescherya...@mirantis.com]

I've noticed you list Remote install and configure a Hadoop cluster (synergy 
with Savanna?) among possible use cases. Recently there was a discussion about 
Savanna on bare metal provisioning through Nova (see thread [1]). Nobody tested 
that yet, but it was concluded that it should work without any changes in 
Savanna code.

So if Compass could set up baremetal provisioning with Nova, possibly Savanna 
will work on top of that out of the box.


The referenced thread is one of the threads that got us wondering whether 
Compass could be of use here.  If you're going to the summit, you can 
brainstorm with our guys.  Otherwise, we can take up this discussion after the 
summit.  Compass should be able to build baremetal-based installs or VM-based 
instances starting from baremetal.  The key is to make sure the Compass design 
and implementation meets the needs/requirements of Savanna and other OpenStack 
projects.

--Rocky

Dmitry

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-October/017438.html

2013/11/1 Robert Collins robe...@robertcollins.net
On 1 November 2013 20:41, Rochelle Grober roc...@gmail.com wrote:

 A message from my associate as he wings to the Icehouse OpenStack summit
 (and yes, we're psyched):
 Our project, code named Compass is a Restful API driven deployment platform
 that performs discovery of the physical machines attached to a specified set
 of switches. It then customizes configurations for machines you identify and
 installs the systems and networks to your configuration specs. Besides
 presenting the technical internals and design decisions of Compass  at the
 Icehouse summit, we will also have a  demo session.
Cool - when is it? I'd like to get along.

...
 We look forward to showing the community our project, receiving and
 incorporating feedback, brainstorming what else it could do, and integrating
 it into the OpenStack family. We are a part of the OpenStack community and
 want to support it both with core participation and with Compass.
I'm /particularly/ interested in the interaction with Neutron and
network modelling - do you use Neutron for the physical switch
interrogation? Do you inform Neutron about the topology, and so on?

Anyhow, let's make sure we can connect and see where we can collaborate!

Cheers,
Rob



--
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Re : welcoming new committers (was Re: When is it okay for submitters to say 'I don't want to add tests' ?)

2013-11-01 Thread David Kranz

On 10/31/2013 10:36 PM, Jeremy Stanley wrote:

On 2013-10-31 22:45:56 + (+), Romain Hardouin wrote:

Adding a message for newcomers is a good idea.
I am a new Horizon contributor, some of my fixes have been merged
(thanks to Upstream University :-) and reviewers of course) but I
still hesitate to do code review. To my mind, it is reserved to
known developers whose opinion matters...

Not at all. One of the best ways to become known within the
community is to review code and provide good recommendations. Even
something as simple as spotting typographical errors in changes to
user-facing messages and documentation provides value. The more
problems you can find (and ultimately help prevent) in a change, the
faster your reputation will grow.

As has been said many times already, OpenStack does not lack
developers... it lacks reviewers.
Reviewing and contributing unit tests are the developer activities we 
have for addressing quality. I think the issue here is how we as a 
community make sure there is balance between these activities and raw 
feature (and bug) contribution, given that most developers most enjoy 
hacking away, myself included. In a corporate software project, this 
balance would be enforced by one or all of:


1. Slowing down development
2. Providing more qa resources, including requiring developers to write 
unit tests
3. Knowingly accepting quality risk in exchange for some 
business-related gain


As an open source community we cannot do some of these things. But lack 
of reviewers effectively slows down development, and we can strive for 
the scalability of quality that comes from developers writing unit 
tests. My first contribution to swift was rejected until I enhanced the 
test infrastructure even though what I did was similar to other things 
that were not really being tested.


We should be nice about it, and spend a little extra effort in helping 
new contributors get into the swing of writing unit tests, but the 
review gate is the only real mechanism we have for making sure we have 
sufficient coverage to keep the code base maintainable by others in the 
future. I really like Rob's list because it leads down a path of better 
shared understanding of how lax/lenient reviewers should be about this.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-01 Thread Nick Chase
 Possible ways of remote interaction:
 - direct hangout participation (key people)
 - questions/comments section in etherpad (delay, more difficult to follow
- might need some assistant for tracking them)

I have a realtime browser-based chat app that is normally used in
conjunction with live events being streamed out of Second Life. I would be
happy to volunteer it for this.

Someone would have to monitor it for questions during the session and ask
them out loud, but it does provide a log, and also has the advantage that
we can leave it up outside the session for additional conversations.

(Not sure what the difference from IRC would be, actually, though.)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] When is it okay for submitters to say 'I don't want to add tests' ?

2013-11-01 Thread Khanh-Toan Tran
Hey thanks a lot!

- Original Message -
From: Clint Byrum cl...@fewbar.com
To: openstack-dev openstack-dev@lists.openstack.org
Sent: Thursday, October 31, 2013 7:49:55 PM
Subject: Re: [openstack-dev] When is it okay for submitters to say 'I don't 
want to add tests' ?

Excerpts from Khanh-Toan Tran's message of 2013-10-31 07:22:06 -0700:
 Hi all,
 
 As a newbie of the community, I'm not familiar with unittest and how to use 
 it here. I've learned that Jenkins runs tests
 every time we submit some code. But how to write the test and what is a 'good 
 test' and a 'bad test'? I saw some commits
 in gerrit but am unable to say if the written test is enough to judge the 
 code, since it is the author of the code who writes
 the test. Is there a framework to follow or some rules/practices to respect?
 
 Do you have some links to help me out?
 

This is a nice synopsis of the concept of test driven development:

http://net.tutsplus.com/tutorials/python-tutorials/test-driven-development-in-python/

In OpenStack we always put tests in _base_module_name_/tests, so if you
are working on nova, you can see the unit tests in:

nova/tests

You can generally always run the tests by installing the 'tox' python
module/command on your system and running 'tox' in the root of the git
repository.

Projects use various testing helpers to make tests easier to read and
write. The most common one is testtools. A typical test will look like
this:


import testtools

from basemodule import submodule


class TestSubmoduleFoo(testtools.TestCase):
    def test_foo_apple(self):
        self.assertEqual(1, submodule.foo('apple'))

    def test_foo_banana(self):
        self.assertEqual(0, submodule.foo('banana'))


Often unit tests will include mocks and fakes to hide real world
interfacing code from the unit tests. You would do well to read up on
how those concepts work as well, google for 'python test mocking' and
'python test fakes'.
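To make the mocking idea concrete, here is a minimal sketch using the stdlib unittest.mock module. The names 'http_get' and 'fetch_status' are hypothetical, invented purely for illustration; the point is that the real-world dependency is replaced so the test never touches the network.

```python
# Hedged sketch: mocking a real-world dependency in a unit test.
# 'http_get' and 'fetch_status' are hypothetical names for illustration.
import unittest
from unittest import mock


def http_get(url):
    # Stand-in for a real network helper; a unit test must never hit this.
    raise RuntimeError("real network call - should never run in unit tests")


def fetch_status(url):
    # Code under test: depends on the network helper above.
    return http_get(url)["status"]


class TestFetchStatus(unittest.TestCase):
    @mock.patch(f"{__name__}.http_get")
    def test_fetch_status_ok(self, fake_get):
        # The patched helper returns canned data instead of making a request.
        fake_get.return_value = {"status": "ACTIVE"}
        self.assertEqual("ACTIVE", fetch_status("http://example.test"))
        fake_get.assert_called_once_with("http://example.test")
```

A fake would take this one step further: instead of a canned return value, you substitute a small working implementation (e.g. an in-memory dict standing in for a database).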

Good luck, and #openstack-dev is always there to try and help. :)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Some initial code copying for db/migration

2013-11-01 Thread Clayton Coleman


- Original Message -

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L420

  https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L43
 
 This API and these models are what we are trying to avoid exposing to
 the rest of nova. By wrapping these in our NovaObject-based structures,
 we can bundle versioned data and methods together which is what we need
 for cross-version compatibility and parity for the parts of nova that
 are not allowed to talk to the database directly.
 
 See the code in nova/objects/* for the implementations. Right now, these
 just call into the db_api.py, but eventually we want to move the actual
 database implementation into the objects themselves and hopefully
 dispense with most or all of the sqlalchemy/* stuff. This also provides
 us the ability to use other persistence backends that aren't supported
 by sqlalchemy, or that don't behave like it does.
 
 If you're going to be at the summit, come to the objects session on
 Thursday where we'll talk about this in more detail. Other projects have
 expressed interest in moving the core framework into Oslo so that we're
 all doing things in roughly the same way. It would be good to get you
 started on the right way early on before you have the migration hassle
 we're currently enjoying in Nova :)
 

Good idea, I'll dig through the code on the plane :) 
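The versioned-object pattern described in the quoted message can be sketched as a toy, with no Nova code involved; 'FakeDB' and 'Instance' below are illustrative names only, standing in for the sqlalchemy-backed db_api layer and a NovaObject-style wrapper respectively.

```python
# Toy sketch (not real Nova code): a versioned object that bundles data
# with a version tag and hides the raw DB layer from callers.

class FakeDB:
    """Stand-in for the sqlalchemy-backed db_api layer."""
    rows = {1: {"hostname": "vm-1"}}

    @classmethod
    def instance_get(cls, instance_id):
        return dict(cls.rows[instance_id])


class Instance:
    VERSION = "1.0"  # bumped whenever the serialized format changes

    def __init__(self, **fields):
        self.fields = fields

    @classmethod
    def get_by_id(cls, instance_id):
        # Only this classmethod touches the DB layer; callers never do,
        # which is what lets the backend be swapped out later.
        return cls(**FakeDB.instance_get(instance_id))

    def to_primitive(self):
        # Versioned serialization, e.g. for RPC between services.
        return {"version": self.VERSION, "data": self.fields}
```

The design point is that services which are not allowed to talk to the database directly only ever see versioned primitives, never rows.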

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Dissecting the very first review

2013-11-01 Thread Noorul Islam K M

Now we have the first patch [1] merged into the repository using the
OpenStack review process. I would like to bring a few minor issues to
your attention.

First of all I would like to thank [2] Swapnil for fixing the patch.

1. Look at patch set 3 and it changed the Author and also the Committer. I
   am not sure how that happened. I have been using gerrit outside of
   OpenStack and I never saw something like that.

2. Another strange part is that, the author date is Oct 1, 2013 12:53 PM

Also, an ideal process for helping with others' patches is discussed in [2].

Regards,
Noorul

[1] https://review.openstack.org/#/c/54877/
[2] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg05998.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Dissecting the very first review

2013-11-01 Thread Clark Boylan
On Fri, Nov 1, 2013 at 6:29 PM, Noorul Islam K M noo...@noorul.com wrote:

 Now we have the first patch [1] merged into the repository using
 OpenStack review process. I would like to bring into notice some minor
 issues.

 First of all I would like to thank [2] Swapnil for fixing the patch.

 1. Look at patch set 3 and it changed the Author and also the Committer. I
am not sure how that happened. I have been using gerrit outside of
OpenStack and I never saw something like that.

 2. Another strange part is that, the author date is Oct 1, 2013 12:53 PM

 Also an ideal process for helping with others patch is discussed in [2].

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/54877/
 [2] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg05998.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The Author, Committer, and Date are determined by the local git making
the commit. Gerrit is just displaying what was pushed to it. As a
reviewer, if those items are important, you can ask the author of a
patchset to push a new patchset that includes updated and potentially
more correct information. This may involve correcting local settings
(e.g. the system clock), or you can override the values by passing the
'--author' and '--date' options to `git commit`.
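The '--author' and '--date' overrides can be sketched as follows; the repository, names, and dates here are purely illustrative:

```shell
# Sketch: set a commit's author identity and date explicitly
# (repository, names, and dates are illustrative).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "Correct Committer"
git config user.email "committer@example.com"
echo "hello" > README.rst
git add README.rst
git commit -q -m "Initial commit" \
    --author="Correct Author <author@example.com>" \
    --date="2013-11-01T12:53:00"
# Verify what a review system like Gerrit would display:
git log -1 --format='author: %an <%ae>'
# prints: author: Correct Author <author@example.com>
```

To fix an already-made commit before re-pushing to Gerrit, the same flags work with `git commit --amend`.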

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Tracking technology choices using blue prints

2013-11-01 Thread Noorul Islam K M

I was looking at the review [1]. And at that point I vaguely remembered
a discussion on IRC about the framework choice for WSGI. But those
discussions were not captured in a document and brought to a conclusion.
So I think it would be great if we created blueprints for all technology
choices.

Any thoughts on this?

Regards,
Noorul

[1] https://review.openstack.org/#/c/54989/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Dissecting the very first review

2013-11-01 Thread Noorul Islam K M
Clark Boylan clark.boy...@gmail.com writes:

 On Fri, Nov 1, 2013 at 6:29 PM, Noorul Islam K M noo...@noorul.com wrote:


 Now we have the first patch [1] merged into the repository using
 OpenStack review process. I would like to bring into notice some minor
 issues.

 First of all I would like to thank [2] Swapnil for fixing the patch.

 1. Look at patch set 3 and it changed the Author and also the Committer. I
am not sure how that happened. I have been using gerrit outside of
OpenStack and I never saw something like that.

 2. Another strange part is that, the author date is Oct 1, 2013 12:53 PM

 Also an ideal process for helping with others patch is discussed in [2].

 Regards,
 Noorul

 [1] https://review.openstack.org/#/c/54877/
 [2] 
 http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg05998.html

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 The Author, Committer, and Date are determined by the local git making
 the commit. Gerrit is just displaying what was pushed to it. As a
 reviewer if those items are important you can ask the author of a
 patchset to push a new patchset that includes updated and potentially
 more correct information. This may involve correcting local settings
 (eg system clock) or you can override the values by passing the
 '--author' and '--date' options to `git commit`.


The point I am trying to make is that these things should be looked
into before the patch gets merged.

Thanks and Regards
Noorul

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Tracking technology choices using blue prints

2013-11-01 Thread Adrian Otto
Noorul,

I agree that key decisions should be tracked in blueprints. This is the 
blueprint covering this decision, which was made in our 2013-10-18 public 
meeting. Jay's submission is consistent with the direction indicated by the team.

https://blueprints.launchpad.net/solum/+spec/rest-api-base

Transcript log:
http://irclogs.solum.io/2013/solum.2013-10-08-16.01.log.html

Regards,

Adrian


On Nov 1, 2013, at 6:55 PM, Noorul Islam K M noo...@noorul.com wrote:


I was looking at the review [1]. And at that point I vaguely remembered
a discussion on IRC about framework choice for WSGI. But those
discussions are not captured in document and brought to conclusion. So,
I think it will be great, if we create blue print for all technology
choices.

Any thoughts on this?

Regards,
Noorul

[1] https://review.openstack.org/#/c/54989/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Dissecting the very first review

2013-11-01 Thread Adrian Otto

On Nov 1, 2013, at 7:03 PM, Noorul Islam K M noo...@noorul.com wrote:

Clark Boylan clark.boy...@gmail.com writes:

On Fri, Nov 1, 2013 at 6:29 PM, Noorul Islam K M noo...@noorul.com wrote:


Now we have the first patch [1] merged into the repository using
OpenStack review process. I would like to bring into notice some minor
issues.

First of all I would like to thank [2] Swapnil for fixing the patch.

1. Look at patch set 3 and it changed the Author and also the Committer. I
  am not sure how that happened. I have been using gerrit outside of
  OpenStack and I never saw something like that.

2. Another strange part is that, the author date is Oct 1, 2013 12:53 PM

Also an ideal process for helping with others patch is discussed in [2].

Regards,
Noorul

[1] https://review.openstack.org/#/c/54877/
[2] http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg05998.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The Author, Committer, and Date are determined by the local git making
the commit. Gerrit is just displaying what was pushed to it. As a
reviewer if those items are important you can ask the author of a
patchset to push a new patchset that includes updated and potentially
more correct information. This may involve correcting local settings
(eg system clock) or you can override the values by passing the
'--author' and '--date' options to `git commit`.


The point I am trying to make is that, these things should be looked
into before the patch gets merged.

That date looks correct to me. I created/modified README.rst on October 1st, 
and that date was correctly imported into Stackforge from the original upstream 
repo. That's where it came from. I don't think anything is malfunctioning. The 
committer is labeled as:

Swapnil Kulkarni (https://review.openstack.org/#/dashboard/7051) 
swapnilkulkarni2...@gmail.com, Nov 1, 2013 12:27 PM

That also looks correct, because that's the author who tweaked the commit so it 
would build, which is what I approved for merge. I recognize that the root 
cause of the Verify trouble was missing newline characters in the initial 
import, and had nothing to do with the new files being added in patch set 1. 
Let's move past this one together and continue supporting each other so we can 
keep up this great forward momentum.

Thanks,

Adrian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Do we need to clean up resource_id after deletion?

2013-11-01 Thread Clint Byrum
Excerpts from Christopher Armstrong's message of 2013-11-01 11:34:56 -0700:
 Vijendar and I are trying to figure out if we need to set the resource_id
 of a resource to None when it's being deleted.
 
 This is done in a few resources, but not everywhere. To me it seems either
 
 a) redundant, since the resource is going to be deleted anyway (thus
 deleting the row in the DB that has the resource_id column)
 b) actively harmful to useful debuggability, since if the resource is
 soft-deleted, you'll not be able to find out what physical resource it
 represented before it's cleaned up.
 
 Is there some specific reason we should be calling resource_id_set(None) in
 a check_delete_complete method?
 

I've often wondered why some do it, and some don't.

Seems to me that it should be done not inside each resource plugin but
in the generic resource handling code.

However, I have not given this much thought. Perhaps others can provide
insight into why it has been done that way.
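The two styles being debated can be shown side by side as a toy sketch; this is not real Heat code, and the class names and the 'deleted' flag below are invented purely for illustration.

```python
# Toy sketch (not real Heat code): the two delete-handling styles under
# discussion. 'ClearsIdOnDelete' and 'KeepsIdOnDelete' are invented names.

class Resource:
    def __init__(self, physical_id):
        self.resource_id = physical_id

    def resource_id_set(self, value):
        self.resource_id = value


class ClearsIdOnDelete(Resource):
    """Style A: null out resource_id once deletion completes."""
    def check_delete_complete(self, deleted):
        if deleted:
            # The physical id is lost, which hurts debugging of
            # soft-deleted stacks.
            self.resource_id_set(None)
        return deleted


class KeepsIdOnDelete(Resource):
    """Style B: leave resource_id intact; the DB row is removed anyway."""
    def check_delete_complete(self, deleted):
        return deleted
```

If the cleanup really is wanted, doing it once in the generic resource-handling code (as Clint suggests) would remove the per-plugin inconsistency shown here.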

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev