[openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Dmitri Zimine
We touched on this in review https://review.openstack.org/#/c/73205/ and fixed a 
bit there; bringing it up here to discuss further at a slightly higher level. 

Let's go over a tiny bit of YAML definition, clarifying terminology along the way.

Current DSL snippet: 
actions:
   my-action:
      parameters:
         foo: bar
         response: # just agreed to change to 'results'
            select: '$.server_id'
            store_as: v1

In the code, we refer to action.result_helper

1) Note that response is not exactly a parameter. It doesn't refer to 
data. It's a set of (query, variable) pairs that are used to parse the results and 
post data to the global context [1]. The terms response and result do not reflect 
what is actually happening here. Suggestions? Save? Publish? Result Handler? 

2) Whichever name we use for this output transformer, shall it be under 
parameters?   

3) What do we call action/task parameters? Think 'model' (which is reflected in 
YAML, code, docs, talk, etc.)
   input and output? (+1)
   in and out? (-1)  
   request and response? (-1) good for WebServices but not generic enough
   parameters and results? (ok)

4) Syntax simplification: can we drop the 'parameters' keyword? Anything under 
an action is an action parameter, unless it's a reserved keyword, which the 
implementation can parse out. 

actions:
   my-action:
      foo: bar
      task-parameters: # not a keyword, specific to REST_API
         flavor_id:
         image_id:
      publish:  # keyword
         select: '$.server_id'
         store_as: v1

[1] Let's save discussing the way we work with data flow, and alternatives to it, 
for next time. 

DZ. 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Need more sample HOT templates for users

2014-02-14 Thread Qiming Teng
On Fri, Feb 14, 2014 at 08:24:09AM +0100, Thomas Spatzier wrote:

Thanks, Thomas.

The first link actually provides a nice inventory of all Resources and
their properties, attributes, etc.  I didn't look into this because I
was thinking of the word 'developer' differently.  This pointer is
useful for template developers in the sense that they don't have to
check the source code to know a resource type.

Maybe a more elaborate explanation of resource usage is work that
can be left to book or manual authors. 

Regards,
  - Qiming

 Hi Qiming,
 
 not sure if you have already seen it, but there is some documentation
 available at the following locations. If you already know it, sorry for
 dup ;-)
 
 Entry to Heat documentation:
 http://docs.openstack.org/developer/heat/
 
 Template Guide with pointers to more details like documentation of all
 resources:
 http://docs.openstack.org/developer/heat/template_guide/index.html
 
 HOT template guide:
 http://docs.openstack.org/developer/heat/template_guide/hot_guide.html
 
 HOT template spec:
 http://docs.openstack.org/developer/heat/template_guide/hot_spec.html
 
 Regards,
 Thomas
 
 Qiming Teng teng...@linux.vnet.ibm.com wrote on 14/02/2014 06:55:56:
 
  From: Qiming Teng teng...@linux.vnet.ibm.com
  To: openstack-dev@lists.openstack.org
  Date: 14/02/2014 07:04
  Subject: [openstack-dev] [Heat] Need more sample HOT templates for users
 
  Hi,
 
I have recently been trying to convince some co-workers and even some
customers to try deploying and manipulating their applications using Heat.
 
Here is some feedback I got from them, which could hopefully be noteworthy
for the Heat team.
 
- No document can be found on how each Resource is supposed to be
  used. This is partly solved by the added resource_schema API, but it
  does not seem to be exposed by heatclient, and thus the CLI, yet.

  In addition to this, the resource schema itself may print only a simple
  help message in ONE sentence, which could be insufficient for users
  to gain a full understanding.
 
- The current 'heat-templates' project provides quite a few samples in
  the CFN format, but not so many in HOT format.  For early users,
  this means either they will get more accustomed to CFN templates, or
  they need to write HOT templates from scratch.

  Another suggestion is also related to Resource usage. Maybe more,
  smaller HOT templates, each focusing on teaching one or two resources,
  would be helpful. There could be some complex samples as showcases
  as well.
 
   Some thoughts on documenting the Resources:
 
- The doc can be inlined in the source file, where a developer
  provides the manual of a resource when it is developed. People won't
  forget to update it if the implementation is changed. A Resource can
  provide a 'describe', 'usage', or 'help' method to be inherited and
  implemented by all resource types (a rough sketch follows below).
 
  One problem with this is that code mixed with long help text may be
  annoying for some developers.  Another problem is about
  internationalization.
 
- Another option is to create a subdirectory in the doc directory,
  dedicated to resource usage. In addition to the API references, we
  also provide resource references (think of the AWS CFN online docs).
 
Does this make sense?
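
To make the inline-doc idea above concrete, here is a minimal sketch; it is
not the actual Heat Resource API, and the class and attribute names are made
up purely for illustration:

class Resource(object):
    """Hypothetical base class carrying a common usage/help hook."""

    usage_doc = "No usage documentation provided for this resource."

    @classmethod
    def usage(cls):
        # Subclasses override usage_doc (or this method) with a longer,
        # human-readable description of how the resource is meant to be
        # used, so the text lives next to the implementation.
        return cls.usage_doc


class WaitCondition(Resource):
    usage_doc = ("Pair a WaitCondition with a WaitConditionHandle to make "
                 "the stack pause until an instance signals that its "
                 "boot-time configuration has finished.")


print(WaitCondition.usage())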
 
  Regards,
- Qiming
 
  -
  Qiming Teng, PhD.
  Research Staff Member
  IBM Research - China
  e-mail: teng...@cn.ibm.com
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Renat Akhmerov

On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:

 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results’ 

Just a note: “response” indentation here is not correct, it’s not a parameter 
called “response” but rather a property of “my-action”.

   select: '$.server_id'  
   store_as: v1
 
 In the code, we refer to action.result_helper
 
 1) Note that response is not exactly a parameter. It doesn't refer to 
 data. It's a set of (query, variable) pairs that are used to parse the results and 
 post data to global context [1]. The terms response, or result, do not 
 reflect what is actually happening here. Suggestions? Save? Publish? Result 
 Handler? 

For explicitness we could use something like “result-handler”, and initially I 
thought about this option. But I personally love conciseness, and I think the name 
“result” would be OK for this section, meaning it defines the structure of the 
action result. “handler” is not 100% precise either, because we don’t actually 
handle a result here; we define the rules for how to get this result.

I would appreciate seeing other suggestions, though.

 2) Whichever name we use for this output transformer, shall it be under 
 parameters?

No, what we have in this section is like a return value type for a regular 
method. Parameters define action input.
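
As a loose analogy in plain Python (not Mistral code): the parameters play the
role of the method arguments, while the result/response section only describes
how the return value is picked apart and under which name it is published.

# Loose analogy only -- not Mistral code.
def create_server(flavor_id, image_id):      # "parameters" ~ input arguments
    return {"server_id": "abc-123"}          # raw action output

raw = create_server(flavor_id="1", image_id="42")
context = {}
context["v1"] = raw["server_id"]             # ~ select: '$.server_id', store_as: v1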

 3) how do we call action/task parameters? Think 'model' (which reflects in 
 yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)  
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)

Could you please clarify your questions here? Not sure I’m following...

 4) Syntax simplification: can we drop 'parameters' keyword? Anything under 
 action is action parameters, unless it's a reserved keyword, which the 
 implementation can parse out. 
 
 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'  
   store_as: v1

It will create problems like name ambiguity in case we need to have a parameter 
with the same name as a keyword (“task-parameters” and “publish” in your 
example). My preference would be to leave it explicit.
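
A rough sketch of where the ambiguity bites, assuming a hypothetical loader;
the reserved-word set below is invented for illustration:

# Hypothetical loader sketch -- the reserved set is illustrative only.
# With 'parameters' dropped, everything under an action has to be split
# into known keywords vs. "the rest are parameters", so a real parameter
# that happens to be named 'publish' silently stops being a parameter.
RESERVED = {'publish', 'result'}

def split_action_body(body):
    keywords = {k: v for k, v in body.items() if k in RESERVED}
    parameters = {k: v for k, v in body.items() if k not in RESERVED}
    return keywords, parameters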

Renat

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Gate] qemu: linux kernel too old to load a ram disk

2014-02-14 Thread sahid
Hello,

It looks like for the last 12 hours the gate has been failing in 100% of cases 
because of an error with libvirt (logs/libvirtd.txt):
qemu: linux kernel too old to load a ram disk


Bug reported on openstack-ci:
https://bugs.launchpad.net/openstack-ci/+bug/1280142

Fingerprint:
http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTpcInFlbXU6IGxpbnV4IGtlcm5lbCB0b28gb2xkIHRvIGxvYWQgYSByYW0gZGlza1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTM5MjM2NzU5MTY1MX0=

s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-14 Thread Joshua Harlow
+1

Mentoring, devoted mentors, and not demotivating new folks (but instead 
growing and fostering them) are IMHO 10x more important than a badge program. 
Badges seem nice and all, but I think they're not the biggest bang for the buck.

Sent from my really tiny device...

 On Feb 13, 2014, at 6:06 AM, Sean Dague s...@dague.net wrote:
 
 On 02/13/2014 05:37 AM, Thierry Carrez wrote:
 Sandy Walsh wrote:
 The informal OpenStack motto is automate everything, so perhaps we should 
 consider some form of gamification [1] to help us? Can we offer badges, 
 quests and challenges to new users to lead them on the way to being strong 
 contributors?
 
 Fixed your first bug badge
 Updated the docs badge
 Got your blueprint approved badge
 Triaged a bug badge
 Reviewed a branch badge
 Contributed to 3 OpenStack projects badge
 Fixed a Cells bug badge
 Constructive in IRC badge
 Freed the gate badge
 Reverted branch from a core badge
 etc.
 
 I think that works if you only keep the ones you can automate.
 Constructive in IRC for example sounds a bit subjective to me, and you
 don't want to issue those badges one-by-one manually.
 
 Second thing, you don't want the game to start polluting your bug
 status, i.e. people randomly setting bugs to triaged to earn the
 Triaged a bug badge. So the badges we keep should be provably useful ;)
 
 A few other suggestions:
 Found a valid security issue (to encourage security reports)
 Fixed a bug submitted by someone else (to encourage attacking random bugs)
 Removed code (to encourage tech debt reduction)
 Backported a fix to a stable branch (to encourage backporting)
 Fixed a bug that was tagged nobody-wants-to-fix-this-one (to encourage
 people to attack critical / hard bugs)
 
 We might need protected tags to automate this: tags that only some
 people could set to bugs/tasks to designate gate-freeing or
 nobody-wants-to-fix-this-one bugs that will give you badges if you fix
 them.
 
 So overall it's a good idea, but it sounds a bit tricky to automate it
 properly to avoid bad side-effects.
 
 Gamification is a cool idea, if someone were to implement it, I'd be +1.
 
 Realistically, the biggest issue I see with on-boarding is mentoring
 time. Especially with folks completely new to our structure, there is a
 lot of confusing things going on. And OpenStack is a ton to absorb. I
 get pinged a lot on IRC, answer when I can, and sometimes just have to
 ignore things because there are only so many hours in the day.
 
 I think Anita has been doing a great job with the Neutron CI onboarding
 and new folks, and that's given me perspective on just how many
 dedicated mentors we'd need to bring new folks on. With 400 new people
 showing up each release, it's a lot of engagement time. It's also
 investment in our future, as some of these folks will become solid
 contributors and core reviewers.
 
 So it seems like the only way we'd make real progress here is to get a
 chunk of people to devote some dedicated time to mentoring in the next
 cycle. Gamification might be most useful, but honestly I expect a Start
 Here page with the consolidated list of low-hanging-fruit bugs, and a
 Review Here page with all reviews for low hanging fruit bugs (so they
 don't get lost by core review team) would be a great start.
 
 The delays on reviews for relatively trivial fixes I think is something
 that is probably more demotivating to new folks than the lack of badges.
 So some ability to keep on top of that I think would be really great.
 
-Sean
 
 -- 
 Sean Dague
 Samsung Research America
 s...@dague.net / sean.da...@samsung.com
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-14 Thread Gary Kotton
Hi,
We are currently looking into that.
Thanks
Gary

From: Jay Lau jay.lau@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, February 13, 2014 11:14 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver 
problems

Thanks Gary.

What about live migration with VCDriver? Currently I cannot do live migration 
between ESX servers in one cluster.

2014-02-13 16:47 GMT+08:00 Gary Kotton 
gkot...@vmware.com:
Hi,
The commit 
https://github.com/openstack/nova/commit/c4bf32c03283cbedade9ab8ca99e5b13b9b86ccb
 added a warning that the ESX driver is not tested. My understanding is that 
there are a number of people using the ESX driver so it should not be 
deprecated. In order to get the warning removed we will need to have CI on the 
driver. As far as I know there is no official decision to deprecate it.
Thanks
Gary

From: Jay Lau jay.lau@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Thursday, February 13, 2014 4:00 AM
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Subject: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

Greetings,

I was now doing some integration with VMWare VCDriver and have some questions 
during the integration work.

1) At the Hong Kong Summit it was mentioned that ESXDriver will be dropped, so do 
we have any plan for when to drop this driver?
2) There are many good features in VMWare that are not supported by VCDriver, such 
as live migration, cold migration and resize within one vSphere cluster; also 
we cannot get individual ESX Server details via VCDriver.

Do we have any plans to make those features work?

--
Thanks,

Jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-14 Thread Sylvain Bauza
Instead of limiting the consumed bandwidth by proposing a configuration
flag (yet another one, and which default value should it be set to?), I would
propose to simply decrease the niceness of the process itself, so that other
processes get I/O access first.
That's not perfect, I assume, but it's a quick workaround that limits the
frustration.
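
For illustration, the local-copy branch of copy_image() could look roughly like
this; it assumes 'nice' and 'ionice' (util-linux) are available on the compute
host, and it is a sketch, not a tested patch:

import subprocess

def copy_image_low_priority(src, dest):
    # Run the local copy at the lowest CPU and I/O priority so that other
    # guests' I/O is served first. ionice -c 2 -n 7 = best-effort class at
    # the lowest priority; nice -n 19 = lowest CPU priority. In nova itself
    # this would go through its usual execute() helper instead.
    subprocess.check_call(['nice', '-n', '19',
                           'ionice', '-c', '2', '-n', '7',
                           'cp', src, dest])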

-Sylvain


2014-02-14 4:52 GMT+01:00 Wangpan hzwang...@corp.netease.com:

   Currently nova doesn't limit the disk IO bandwidth in the copy_image()
 method while creating a new instance, so the other instances on this host
 may be affected by this high disk-IO-consuming operation, and some
 time-sensitive business (e.g. an RDS instance with heartbeat) may be switched
 between master and slave.

 So can we use the `rsync --bwlimit=${bandwidth} src dst` command instead
 of `cp src dst` for copy_image in create_image() of the libvirt driver? The
 remote image copy operation can also be limited by `rsync
 --bwlimit=${bandwidth}` or `scp -l=${bandwidth}`. This parameter
 ${bandwidth} can be a new configuration option in nova.conf which allows the
 cloud admin to configure it; its default value is 0, which means no limitation,
 so the instances on this host will not be affected while a new instance with
 a non-cached image is being created.

 the example codes:
 nova/virt/libvirt/utils.py:
 diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
 index e926d3d..5d7c935 100644
 --- a/nova/virt/libvirt/utils.py
 +++ b/nova/virt/libvirt/utils.py
 @@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
          # sparse files.  I.E. holes will not be written to DEST,
          # rather recreated efficiently.  In addition, since
          # coreutils 8.11, holes can be read efficiently too.
 -        execute('cp', src, dest)
 +        if CONF.mbps_in_copy_image > 0:
 +            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
 +                    src, dest)
 +        else:
 +            execute('cp', src, dest)
      else:
          dest = "%s:%s" % (host, dest)
          # Try rsync first as that can compress and create sparse dest files.
 @@ -484,11 +487,22 @@ def copy_image(src, dest, host=None):
              # Do a relatively light weight test first, so that we
              # can fall back to scp, without having run out of space
              # on the destination for example.
 -            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
 +            if CONF.mbps_in_copy_image > 0:
 +                execute('rsync', '--sparse', '--compress', '--dry-run',
 +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
 +                        src, dest)
 +            else:
 +                execute('rsync', '--sparse', '--compress', '--dry-run',
 +                        src, dest)
          except processutils.ProcessExecutionError:
 -            execute('scp', src, dest)
 +            if CONF.mbps_in_copy_image > 0:
 +                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8),
 +                        src, dest)
 +            else:
 +                execute('scp', src, dest)
          else:
 -            execute('rsync', '--sparse', '--compress', src, dest)
 +            if CONF.mbps_in_copy_image > 0:
 +                execute('rsync', '--sparse', '--compress',
 +                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
 +                        src, dest)
 +            else:
 +                execute('rsync', '--sparse', '--compress', src, dest)


 2014-02-14
 --
  Wangpan

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bad default values in conf files

2014-02-14 Thread Lingxian Kong
2014-02-13 23:19 GMT+08:00 Jay Pipes jaypi...@gmail.com:

 On Thu, 2014-02-13 at 09:38 -0500, David Kranz wrote:
  I was recently bitten by a case where some defaults in keystone.conf
  were not appropriate for real deployment, and our puppet modules were
  not providing better values
  https://bugzilla.redhat.com/show_bug.cgi?id=1064061. Since there are
   hundreds (thousands?) of options across all the services, I am wondering
  whether there are other similar issues lurking and if we have done what
  we can to flush them out.
 
  Defaults in conf files seem to be one of the following:
 
  - Generic, appropriate for most situations
  - Appropriate for devstack
  - Appropriate for small, distro-based deployment
  - Approprate for large deployment
 
  Upstream, I don't think there is a shared view of how defaults should be
  chosen.
 
   Keeping bad defaults can have a huge impact on performance and on when a
   system falls over, but the problems may not be visible until some time
  after a system gets into real use. Have the folks creating our puppet
  modules and install recommendations taken a close look at all the
  options and determined
  that the defaults are appropriate for deploying RHEL OSP in the
  configurations we are recommending?

 This is a very common problem in the configuration management space,
 frankly. One good example is the upstream mysql Chef cookbook keeping
 ludicrously low InnoDB buffer pool, log and data file sizes. The
 defaults from MySQL -- which were chosen, frankly, in the 1990s -- are
 useful for nothing more than a test environment, but unfortunately they
 propagate to far too many deployments with folks unaware of the serious
 side-effects on performance and scalability until it's too late [1].

 I think it's an excellent idea to do a review of the values in all of
 the configuration files and do the following:

 * Identify settings that simply aren't appropriate for anything and make
 the change to a better default.

 * Identify settings that need to scale with the size of the underlying
 VM or host capabilities, and provide patches to the configuration file
 comments that clearly indicate a recommended scaling factor. Remember
 that folks writing Puppet modules, Ansible scripts, Salt SLS files, and
 Chef cookbooks look first to the configuration files to get an idea of
 how to set the values.
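
 For example, such a hint could look like this in an option definition; the
 option name and numbers are purely illustrative, not an existing setting:

 # Purely illustrative option -- the name and numbers are invented to show
 # the kind of scaling hint that could appear in the sample config file.
 from oslo.config import cfg

 opts = [
     cfg.IntOpt('worker_pool_size',
                default=4,
                help='Number of worker processes. Recommended scaling '
                     'factor: roughly 1-2 workers per CPU core on the host; '
                     'the default of 4 only suits small test environments.'),
 ]

 CONF = cfg.CONF
 CONF.register_opts(opts)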


Good idea! +1 for providing a recommended scaling factor for the related
settings




 Best,
 -jay

 [1] The reason I say it's too late is that for some configuration
 values -- notably innodb_log_file_size and innodb_data_file_size -- it is
 not possible to change the configuration values after data has been
 written to disk. You need to literally dump the contents of the DBs and
 reload the database after removing the files and restarting the DBs
 after changing the configuration options in my.cnf. See this bug for
 details on this pain in the behind:

 https://tickets.opscode.com/browse/COOK-2100


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
*---*
*Lingxian Kong*
Huawei Technologies Co.,LTD.
IT Product Line CloudOS PDU
China, Xi'an
Mobile: +86-18602962792
Email: konglingx...@huawei.com; anlin.k...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-14 Thread Nick Ma
I'm also interested in it. UTC8.

-- 

cheers,
Li Ma


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-14 Thread sahid
It could be a good idea, but as Sylvain said, how do we configure this? Also, what 
about using scp instead of rsync for a local copy?

- Original Message -
From: Wangpan hzwang...@corp.netease.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Friday, February 14, 2014 4:52:20 AM
Subject: [openstack-dev] [nova] Should we limit the disk IO bandwidth in
copy_image while creating new instance?

Currently nova doesn't limit the disk IO bandwidth in the copy_image() method while 
creating a new instance, so the other instances on this host may be affected by 
this high disk-IO-consuming operation, and some time-sensitive business (e.g. an RDS 
instance with heartbeat) may be switched between master and slave.

So can we use the `rsync --bwlimit=${bandwidth} src dst` command instead of `cp 
src dst` for copy_image in create_image() of the libvirt driver? The remote image 
copy operation can also be limited by `rsync --bwlimit=${bandwidth}` or `scp 
-l=${bandwidth}`. This parameter ${bandwidth} can be a new configuration option in 
nova.conf which allows the cloud admin to configure it; its default value is 0, 
which means no limitation, so the instances on this host will not be affected while 
a new instance with a non-cached image is being created.

the example codes:
nova/virt/libvirt/utils.py:
diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
index e926d3d..5d7c935 100644
--- a/nova/virt/libvirt/utils.py
+++ b/nova/virt/libvirt/utils.py
@@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
         # sparse files.  I.E. holes will not be written to DEST,
         # rather recreated efficiently.  In addition, since
         # coreutils 8.11, holes can be read efficiently too.
-        execute('cp', src, dest)
+        if CONF.mbps_in_copy_image > 0:
+            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                    src, dest)
+        else:
+            execute('cp', src, dest)
     else:
         dest = "%s:%s" % (host, dest)
         # Try rsync first as that can compress and create sparse dest files.
@@ -484,11 +487,22 @@ def copy_image(src, dest, host=None):
             # Do a relatively light weight test first, so that we
             # can fall back to scp, without having run out of space
             # on the destination for example.
-            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress', '--dry-run',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                        src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', '--dry-run',
+                        src, dest)
         except processutils.ProcessExecutionError:
-            execute('scp', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8),
+                        src, dest)
+            else:
+                execute('scp', src, dest)
         else:
-            execute('rsync', '--sparse', '--compress', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                        src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', src, dest)


2014-02-14



Wangpan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] project name collision - renaming required

2014-02-14 Thread Sergey Lukjanov
Hi folks,

It was decided to remove some names from the initial voting; please see
them in the meeting logs:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-13-18.00.html

Thanks.


On Thu, Feb 13, 2014 at 4:34 PM, Sergey Lukjanov slukja...@mirantis.com wrote:

 Please note that I'm planning to set up the first round of name voting today
 to select the top N options and drop the bad ones. It'll be done after the IRC
 team meeting.


 On Sat, Feb 8, 2014 at 9:34 AM, Sergey Lukjanov slukja...@mirantis.com wrote:

  There are already some names proposed in the etherpad, so I'll set up a
  vote to choose the top N options and discuss them in detail. The voting will
  start after the next IRC team meeting next week, Feb 13.

 Thanks.

 P.S. Looking forward for new name proposals :)


 On Thu, Jan 30, 2014 at 10:39 AM, Sergey Lukjanov slukja...@mirantis.com
  wrote:

 Hi folks,

  As part of preparations for graduation from incubation [0], I've
  contacted the OpenStack Foundation Marketing team to ensure that everything is
  ok with our program and project names from their point of view. Thanks to
  Lauren Sell for providing the info.

  Thetus Corporation already uses 'Savanna' as the name for one of their
  technologies [1]. Thetus doesn't actually hold any registered trademark for
  'Savanna', but they have common-law rights to the mark because they have
  established use and marketed it since 2010, so if we applied for a
  trademark, they could certainly challenge us and win. So, the answer from
  the marketing team was that it's a pretty high risk to continue using our
  lovely name... The saddest part is that I couldn't google their site using
  the words savanna, hadoop, cloud, etc.

  Let's move on to the new name selection process. I'm proposing one or
  two weeks for brainstorming and sharing your thoughts about the project
  name, depending on the number of suitable options; then I'll probably set up a
  small vote to select the best option.

  I've created an etherpad [2] for discussing new name options, so the
  process is: send your options to this thread and then go to the etherpad [2]
  to discuss them. Please don't forget to google your options to avoid one
  more renaming in the future ;)

  Thanks, looking forward to your thoughts.

 [0]
 http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements-
  Project should have engaged with marketing team to check suitable
 official name
 [1] http://www.thetus.com/savanna
 [2] https://etherpad.openstack.org/p/savanna-renaming

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.




-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Gate] qemu: linux kernel too old to load a ram disk

2014-02-14 Thread Qiu Yu
On Fri, Feb 14, 2014 at 4:58 PM, sahid sahid.ferdja...@cloudwatt.com wrote:

 Hello,

  It looks like for the last 12 hours the gate has been failing in 100% of cases
  because of an error with libvirt (logs/libvirtd.txt):
 qemu: linux kernel too old to load a ram disk


 Bug reported on openstack-ci:
 https://bugs.launchpad.net/openstack-ci/+bug/1280142

 Fingerprint:

 http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTpcInFlbXU6IGxpbnV4IGtlcm5lbCB0b28gb2xkIHRvIGxvYWQgYSByYW0gZGlza1wiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIiLCJzdGFtcCI6MTM5MjM2NzU5MTY1MX0=

 s.


Just marked it as a duplicate of
https://bugs.launchpad.net/openstack-ci/+bug/1280072

Seems glance is not happy with the newly released python-swiftclient 2.0, and
then, with corrupted images, all VM provisioning simply fails.

--
Qiu Yu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] renaming: initial voting

2014-02-14 Thread Sergey Lukjanov
Hi folks,

I've created a poll to select 10 candidates for the new Savanna name. It's the
first round of selecting a new name for our lovely project. The poll will
end on Monday, Feb 17.

You should receive an email from Sergey Lukjanov (CIVS poll supervisor)
slukja...@mirantis.com  via cs.cornell.edu with topic Poll: Savanna new
name candidates.

Thank you!

P.S. I've bcced all ATCs, don't panic.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Ralf Haferkamp
Hi

On Fri, Feb 14, 2014 at 10:27:20AM +1300, Robert Collins wrote:
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.
 
  I've copied -operators in on this because it's very relevant IMO to operators 
 :)
 
 So far:
  - some packages use different usernames
   - some put things in different places (and all of them use different
  places from the bare metal ephemeral device layout, which requires
  /mnt/).
  - possibly more in future.
 
  Now, obviously it's a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:
 
 # A
  - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
  - and then each distro (both flavor of Linux and also possibly things
 like Fuel that distribution OpenStack) is different - install on X,
 get some delta vs reference.
  - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.
I agree with what James already said here. It's probably not TripleO's job to
document all that.  Good documentation for the reference layout should be the
goal.

And currently the differences aren't all that big, I think. And for some of them
we already have good solutions (e.g. the os-svc-* tools). There is room
for improvement in the handling of the username differences, though :)

 # B
  - we have one layout, with one set of install paths, usernames
  - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.
Unless I am completely misunderstanding your proposal, I think this would void
many of the reasons why people would choose to install from packages in the
first place.

  - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)
 
 B seems much more useful to our ops users - less subtly wrong docs, we
 avoid bugs where tools we write upstream make bad assumptions,
 experience operating a TripleO deployed OpenStack is more widely
 applicable (applies to all such installs, not just those that happened
 to use the same package source).
I am probably repeating much of what James said already. But I think an
operator who decides to do a package-based TripleO installation does
so e.g. because he is familiar with the tools and conventions of the specific
distro/provider of the packages he chose, and probably wants TripleO to be
consistent with that. And yes, with the decision for packages, he decides to
differ from the reference layout.

 I see this much like the way Nova abstracts out trivial Hypervisor
 differences to let you 'nova boot' anywhere, that we should be hiding
 these incidental (vs fundamental capability) differences.
 
 What say ye all?
 
 -Robv

-- 
regards,
Ralf

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] vhost-{pid} takes up cpu

2014-02-14 Thread Yongsheng Gong
Hi dear stackers,

I am running a devstack with two nodes:
one is the controller (no nova-compute running) and the other is a compute node.

I am using neutron ml2 plugin and ovs agent with GRE tunnel.

I started a VM and tried to run iperf testing:
1. start iperf as server role in the VM, which has a floating ip
192.168.10.10:
iperf -s

2. start iperf as the client role on the controller node to test the VM via
floating ip:
iperf -c 192.168.10.10 -t 120

At the same time, running top on the compute node, I found the vhost-{pid} thread
was taking too much CPU (more than 75%):

Tasks: 230 total,   2 running, 228 sleeping,   0 stopped,   0 zombie
%Cpu0  : 23.7 us,  9.1 sy,  0.0 ni, 66.9 id,  0.0 wa,  0.0 hi,  0.3 si,
 0.0 st
%Cpu1  : 10.2 us,  2.7 sy,  0.0 ni, 86.3 id,  0.0 wa,  0.0 hi,  0.7 si,
 0.0 st
%Cpu2  :  0.0 us, 19.8 sy,  0.0 ni,  5.2 id,  0.0 wa,  2.2 hi, 72.8 si,
 0.0 st
%Cpu3  : 12.6 us,  4.2 sy,  0.0 ni, 83.2 id,  0.0 wa,  0.0 hi,  0.0 si,
 0.0 st
KiB Mem:  12264832 total,  1846428 used, 10418404 free,43692 buffers
KiB Swap:0 total,0 used,0 free,   581572 cached

  PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEMTIME+  COMMAND

 4073 root  20   0 000 R  79.8  0.0   3:26.97 vhost-4070


gongysh@gongysh-p6535cn:~$ ps -ef | grep 4070
119   4070 1 31 19:24 ?00:04:13 qemu-system-x86_64 -machine
accel=kvm:tcg -name instance-0002 -S -M pc-i440fx-1.4 -m 2048 -smp
1,sockets=1,cores=1,threads=1 -uuid 5cbfb914-d16c-4b51-a057-50f2da827830
-smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack
Nova,version=2014.1,serial=94b43180-e901-1015-b061-90cecbca80a3,uuid=5cbfb914-d16c-4b51-a057-50f2da827830
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0002.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -no-hpet -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none
-device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:da:ae,bus=pci.0,addr=0x3
-chardev
file,id=charserial0,path=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc
0.0.0.0:0 -k en-us -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


This is a big performance problem: imagine almost all the CPU resources
being consumed by the vhost-{pid} threads if we have many VMs all
running full-speed network traffic.


any ideas?

regards,
yong sheng gong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-14 Thread Victoria Martínez de la Cruz
Hi all,

I think we should separate between mentoring efforts and documentation.

For the first one, it is true that there are always tons of tasks to do, but
sometimes a beginner won't find them because they are familiar neither with the
workflow nor with the community. People who are not involved with
open source just don't know where to start when reaching out to an
organization of this kind. So, having someone to ask about this is usually
a great help. That is something OpenHatch is intended to solve.

I must say that OpenHatch is not time consuming and helped several people
to join OpenStack and start contributing to it. And that is great :)

Another thing we tried last year is #openstack-101, a channel for new
contributors where they are free to ask any questions they may have. I'm
happy to say this worked as a hub between newcomers and the community and
that lots of people have been able to start working with us.

About documentation, I agree that it could be quite overwhelming the first
time, but that is how complex our organization is. The thing is... we get
used to that.

Perhaps we could ask new contributors what they would like to find in the
'How to contribute' wiki page and refine it to make it easier for new
people (I volunteer for that!).

Finally, I wanted to mention that mentoring is great. I still have many
things to learn, but I have been able to guide people through their first
steps in the community, and it is cool to see how a little effort like that
leads, later on, to great contributions.

Thanks all for the feedback,

Victoria



2014-02-13 23:46 GMT-03:00 Luis de Bethencourt l...@debethencourt.com:

 On 13 February 2014 21:09, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-02-12 14:42:17 -0600 (-0600), Dolph Mathews wrote:
 [...]
  There's a lot of such scenarios where new contributors can
  quickly find things to contribute, or at least provide incredibly
  valuable feedback to the project in the form of reviews!
 [...]

 I heartily second the suggestion. The biggest and best thing I did
 as a new contributor was to start reviewing changes first thing. An
 initial contributor, if they have any aptitude for software
 development at all, will be able to tell a ton about our development
 community by how it interacts through code review. The test-centric
 methodology, style guidelines and general level of
 acceptance/tolerance for various things become immediately apparent.
 You also get to test your understanding of the source by watching
 all the mistakes other reviewers find that you missed in your
 reviewing. Refine and repeat.

 Getting a couple of very simple changes in right away also helps you
 pick up the workflow and toolset, but reviewing others changes is a
 huge boon to both the project and the would-be contributors doing
 the reviewing... much more so than correcting a handful of
 typographical errors.
 --
 Jeremy Stanley



 That is a very good idea Jeremy.

 I started learning and contributing to OpenStack yesterday. I have been
 writing down all the things I do, read and discover. Planning to blog about
  it and share it. I think it would be valuable to show how to contribute and
 learn the project from the point of view of a novice to it.

 Cheers,
 Luis


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] User Signup

2014-02-14 Thread Soren Hansen
2014-02-10 17:03 GMT+01:00 Kieran Spear kisp...@gmail.com:
 On 10 February 2014 08:27, Soren Hansen so...@linux2go.dk wrote:
 I agree that putting admin credentials on a public web server is a
 security risk, but I'm not sure why a set of restricted admin
 credentials that only allow you to create users and tenants is a
  bigger problem than the credentials for a separate registration service
 that performs the exact same operations?
 The third (and most dangerous) operation here is the role grant. I
 don't think any Keystone policy could be specific enough to prevent
 arbitrary member role assignment in this case.

Fair enough. That seems like something we should fix, though. It really
seems to me like adding this intermediate service is an overly
complicated (although necessary given the current constraints) approach.

User registration seems like something that very much falls under
Keystone's domain:

 * Keystone should abstract any and all interaction with the user
   database. Having another service that adds things directly to MySQL
   or LDAP seems wrong to me.

 * Having a component whose only job is to talk to Keystone really
   screams to me that it ought to be part of Keystone.

Perhaps a user registration API extension that lets you pass just
username/password/whatever and then it creates the relevant things on
the backend in a way that's configured in Keystone. I.e. it validates
the request and then creates the user and tenant and grants the
appropriate roles.
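
To make the shape of that concrete, here is a minimal sketch of the backend
operations such an extension would bundle, expressed with python-keystoneclient
v2.0 admin calls; the wrapping function and the 'Member' role name are
illustrative assumptions, not an existing API:

from keystoneclient.v2_0 import client as ks_client

def register(admin, username, password, email):
    # One tenant per user, then the user, then the role grant -- the three
    # steps the registration extension would validate and perform itself.
    tenant = admin.tenants.create(tenant_name=username,
                                  description='self-service tenant',
                                  enabled=True)
    user = admin.users.create(name=username, password=password,
                              email=email, tenant_id=tenant.id)
    member = [r for r in admin.roles.list() if r.name == 'Member'][0]
    admin.roles.add_user_role(user, member, tenant)
    return user, tenant

# admin = ks_client.Client(username='admin', password='secret',
#                          tenant_name='admin',
#                          auth_url='http://keystone.example.com:35357/v2.0')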

As I see it, if we don't trust Keystone's security, we're *so* screwed
anyway. This needs to work.

-- 
Soren Hansen | http://linux2go.dk/
Ubuntu Developer | http://www.ubuntu.com/
OpenStack Developer  | http://www.openstack.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Nikolay Makhotkin
Current DSL snippet:
actions:
   my-action:
      parameters:
         foo: bar
         response: # just agreed to change to 'results'
            select: '$.server_id'
            store_as: v1

'result' sounds better than 'response' and, I think, fits the action
description better.
And I'd suggest a new word - 'output'; actually, this block describes how
the output will be taken and stored.

However, I agree that this block should be at action-property level:

actions:
   my-action:
      result:
         select: '$.server_id'
         store_as: vm_id
      parameters:
         foo: bar



On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov rakhme...@mirantis.com wrote:


 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:

 Current DSL snippet:
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results'


 Just a note: response indentation here is not correct, it's not a
 parameter called response but rather a property of my-action.

   select: '$.server_id'
   store_as: v1

 In the code, we refer to action.result_helper

 1) Note that *response* is not exactly a parameter. It doesn't
 refer to data. It's  (query, variable) pairs, that are used to parse the
 results and post data to global context [1]. The terms response, or result,
 do not reflect what is actually happening here. Suggestions? Save? Publish?
 Result Handler?


 For explicitness we can use something like result-handler and initially
 I thought about this option. But I personally love conciseness and I think
 name result would be ok for this section meaning it defines the structure
 of the action result. handler is not 100% precise too because we don't
 actually handle a result here, we define the rules how to get this result.

 I would appreciate to see other suggestions though.

 2) Whichever name we use for this output transformer, shall it be under
 parameters?


 No, what we have in this section is like a return value type for a regular
 method. Parameters define action input.

 3) how do we call action/task parameters? Think 'model' (which reflects in
 yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)


 Could you please clarify your questions here? Not sure I'm following...

 4) Syntax simplification: can we drop 'parameters' keyword? Anything under
 action is action parameters, unless it's a reserved keyword, which the
 implementation can parse out.

 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'
   store_as: v1


 It will create problems like name ambiguity in case we need to have a
 parameter with the same names as keywords (task-parameters and publish
 in your example). My preference would be to leave it explicit.

 Renat


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Best Regards,
Nikolay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Skipping next Project/Release meeting and 1:1s

2014-02-14 Thread Thierry Carrez
Hi PTLs,

I need to attend an unexpected family event on Tuesday, so I won't be
able to do our regular 1:1s, nor will I be able to chair the
Project/Release meeting on Feb 18th at 21:00 UTC.

Since the date coincides with FeatureProposalFreeze for a number of
projects, I suspect most of you won't really see clearly where you stand
until the end of the day anyway... So we'll catch up on individual
project status with informal 1:1s on Wednesday/Thursday.

Unless there is some urgent matter that needs to be discussed at the
cross-project meeting, I therefore propose we skip it for this Tuesday
and have the next one on February 25th.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vhost-{pid} takes up cpu

2014-02-14 Thread Nick Ma
You can run the command taskset -pc {pid} for both the kvm guest process
and its vhost-{pid} thread. If their affinities are not identical, you can
change the affinity to achieve NUMA/cache sharing.

Not sure it will solve the problem.
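
For example, roughly (the PIDs below are taken from the top/ps output in the
original mail and are assumptions here):

import subprocess

QEMU_PID, VHOST_PID = 4070, 4073
for pid in (QEMU_PID, VHOST_PID):
    # Show the current CPU affinity, then pin both the qemu process and its
    # vhost thread to the same cores so they share cache / one NUMA node.
    subprocess.check_call(['taskset', '-pc', str(pid)])
    subprocess.check_call(['taskset', '-pc', '0-1', str(pid)])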

-- 

cheers,
Li Ma


On 2/14/2014 7:42 PM, Yongsheng Gong wrote:
 Hi dear stackers,

 I am running a devstack with two nodes:
 one is controller (no nova-compute running) and other is compute.

 I am using neutron ml2 plugin and ovs agent with GRE tunnel.

 I started a VM and tried to run iperf testing:
 1. start iperf as server role in the VM, which has a floating ip
 192.168.10.10 http://192.168.10.10:
 iperf -s

 2 start iperf as client role on the controler node to test the VM via
 floating ip:
 iperf -c 192.168.10.10 -t 120

 at the same time, run top on compute node, I found the vhost-{pid} was
 taking too much cpu resource ( more than 75%):

 Tasks: 230 total,   2 running, 228 sleeping,   0 stopped,   0 zombie
 %Cpu0  : 23.7 us,  9.1 sy,  0.0 ni, 66.9 id,  0.0 wa,  0.0 hi,  0.3
 si,  0.0 st
 %Cpu1  : 10.2 us,  2.7 sy,  0.0 ni, 86.3 id,  0.0 wa,  0.0 hi,  0.7
 si,  0.0 st
 %Cpu2  :  0.0 us, 19.8 sy,  0.0 ni,  5.2 id,  0.0 wa,  2.2 hi, 72.8
 si,  0.0 st
 %Cpu3  : 12.6 us,  4.2 sy,  0.0 ni, 83.2 id,  0.0 wa,  0.0 hi,  0.0
 si,  0.0 st
 KiB Mem:  12264832 total,  1846428 used, 10418404 free,43692 buffers
 KiB Swap:0 total,0 used,0 free,   581572 cached

   PID USER  PR  NI  VIRT  RES  SHR S  %CPU %MEMTIME+  COMMAND
   
  4073 root  20   0 000 R  79.8  0.0   3:26.97 vhost-4070  


 gongysh@gongysh-p6535cn:~$ ps -ef | grep 4070
 119   4070 1 31 19:24 ?00:04:13 qemu-system-x86_64
 -machine accel=kvm:tcg -name instance-0002 -S -M pc-i440fx-1.4 -m
 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid
 5cbfb914-d16c-4b51-a057-50f2da827830 -smbios
 type=1,manufacturer=OpenStack Foundation,product=OpenStack
 Nova,version=2014.1,serial=94b43180-e901-1015-b061-90cecbca80a3,uuid=5cbfb914-d16c-4b51-a057-50f2da827830
 -no-user-config -nodefaults -chardev
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0002.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc
 base=utc,driftfix=slew -no-hpet -no-shutdown -device
 piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
 file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
 -device
 virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive
 file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none
 -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1
 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device
 virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:da:ae,bus=pci.0,addr=0x3
 -chardev
 file,id=charserial0,path=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/console.log
 -device isa-serial,chardev=charserial0,id=serial0 -chardev
 pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1
 -vnc 0.0.0.0:0 http://0.0.0.0:0 -k en-us -vga cirrus -device
 virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


 This is a big performance problem, imagine almost all the cpu
 resources will be consumed by the vhost-{pid}s if we have many VMs and
 all are running with full speed network traffic.


 any ideas?

 regards,
 yong sheng gong



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Dirk Müller
Hi Robert,

 So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
 places to the bare metal ephemeral device layout which requires
 /mnt/).
  - possibly more in future.

Somehow, between your suggestions of options #A and #B, I miss the option
#C that started the whole discussion ;-)

The whole discussion really started on service usernames. I don't
really see the problem with that though: you're logging in as root and
all you do is run systemctl start service. Whether that drops
privileges to user foo or to user bar is really something that can
be abstracted away.

 # A
  - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.

I assume you mean the from-source install with this. I don't
think that you really need to document each individual path or the
like. Most of the 'support install from packages' changes were adding
symlinks for client libraries to /usr/local/bin, aka $PATH. As long as
all the binaries that you as an operator are expected to call are in
$PATH, I see nothing wrong with just documenting that the binary is
available. I also think that the average OpenStack operator can cope
if the documentation says /usr/local/bin/foo
and the binary is /usr/bin/foo. I strongly believe most of them will
not even notice.

This is something that can be tested btw, and used to certify
documentation against reality, be it install from source or packages.

  - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.

Shouldn't that be part of the normal OpenStack Operations Guide? I
don't see why TripleO needs to reinvent general OpenStack
documentation. The existing documentation already covers most of the
platform differences.

 # B
  - we have one layout, with one set of install paths, usernames
  - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.

Which is done anyway already (by relinking stuff to /mnt/state).

  - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)

I think there is nothing wrong with requiring that the documentation of
differences be updated
as part of introducing such changes.

 I see this much like the way Nova abstracts out trivial Hypervisor
 differences to let you 'nova boot' anywhere, that we should be hiding
 these incidental (vs fundamental capability) differences.

Isn't that what the DIB elements do? You build an image with the nova
element, and whatever platform you're building on, it will get you
nova up and running. Why would you need to document which user the
nova element runs under? This is an implementation detail.

In my opinion this boils down to: tests for the documentation. If we
document that you can start a service
by running systemctl start foobar, then there has to be a test for
it. What the code does to launch the service when systemctl start
foobar is run is an implementation detail, and if it requires running as
user bar instead of user foo then so be it.

Installing from packages is not as bad as you might think. It brings
down image building time to less than half the time you need from
source, and you can apply patches that fix *your* problem on *your*
platform. I understand the idea of Continuous Deployment, but it
doesn't mean that the one patch you need to have in your system for
something to work isn't hanging in an upstream review for 3 months or
more. It also allows distributors to provide services on top of
OpenStack, something that pays the paychecks of some of the people in
the community.



Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver problems

2014-02-14 Thread Jay Lau
Cool, thanks Gary.

Do you have any bugs or blueprints filed in Launchpad to track those issues?


2014-02-14 17:11 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 We are currently looking into that.
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Thursday, February 13, 2014 11:14 AM

 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver
 problems

 Thanks Gary.

  What about live migration with VCDriver? Currently I cannot do live
  migration between ESX servers in one cluster.

 2014-02-13 16:47 GMT+08:00 Gary Kotton gkot...@vmware.com:

 Hi,
 The commit
 https://github.com/openstack/nova/commit/c4bf32c03283cbedade9ab8ca99e5b13b9b86ccb added
 a warning that the ESX driver is not tested. My understanding is that there
 are a number of people using the ESX driver so it should not be deprecated.
 In order to get the warning removed we will need to have CI on the driver.
 As far as I know there is no official decision to deprecate it.
 Thanks
 Gary

 From: Jay Lau jay.lau@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Thursday, February 13, 2014 4:00 AM
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [OpenStack][Nova][VCDriver] VMWare VCDriver
 problems

 Greetings,

 I am now doing some integration with the VMware VCDriver and have some
 questions about the integration work.

 1) At the Hong Kong Summit, it was mentioned that the ESXDriver will be dropped,
 so do we have any plan for when to drop this driver?
 2) There are many good features in VMware that are not supported by VCDriver,
  such as live migration, cold migration and resize within one vSphere
  cluster; also, we cannot get individual ESX server details via VCDriver.

 Do we have some plans to make those features work?

 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Thanks,

 Jay

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Interested in attracting new contributors?

2014-02-14 Thread Luis de Bethencourt
On 14 February 2014 07:06, Victoria Martínez de la Cruz 
victo...@vmartinezdelacruz.com wrote:

 Hi all,

 I think we should separate between mentoring efforts and documentation.

 For the first one, it is true that there are always tons of tasks to do, but
 sometimes a beginner won't find them because they are not familiar with the
 workflow or with the community. People who are not involved with
 open source just don't know where to start when reaching out to an
 organization of this kind. So, having someone to ask about this is usually
 a great help. That is something OpenHatch is intended to solve.

 I must say that OpenHatch is not time consuming and helped several people
 to join OpenStack and start contributing to it. And that is great :)

 Another thing we tried last year is #openstack-101, a channel for new
 contributors where they are free to ask any doubt they could have. I'm
 happy to say this worked as a hub between newcomers and the community and
 that lots of people have been able to start working with us.

 About documentation, I agree that it could be quite overwhelming the first
 time, but that is how complex our organization is. The thing is... we get
 used to that.

 Perhaps we could ask new contributors what they would like to find in the
 'How to contribute' wiki page and refine it to make it easier for new
 people (I volunteer for that!).

 Finally, I wanted to mention that mentoring is great. I still have many
 things to learn, but I have been able to guide people with their first
 steps in the community, and it is cool to see how a little effort like that
 leads, later on, to great contributions.

 Thanks all for the feedback,

 Victoria


It has probably been mentioned before and I missed it, but where can people
find these mentors you talk about? This should be mentioned in pages like:
https://wiki.openstack.org/wiki/HowToContribute
https://wiki.openstack.org/wiki/DevQuickstart
https://swiftstack.com/blog/2013/02/12/swift-for-new-contributors/




 2014-02-13 23:46 GMT-03:00 Luis de Bethencourt l...@debethencourt.com:

 On 13 February 2014 21:09, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-02-12 14:42:17 -0600 (-0600), Dolph Mathews wrote:
 [...]
  There's a lot of such scenarios where new contributors can
  quickly find things to contribute, or at lest provide incredibly
  valuable feedback to the project in the form of reviews!
 [...]

 I heartily second the suggestion. The biggest and best thing I did
 as a new contributor was to start reviewing changes first thing. An
 initial contributor, if they have any aptitude for software
 development at all, will be able to tell a ton about our development
 community by how it interacts through code review. The test-centric
 methodology, style guidelines and general level of
 acceptance/tolerance for various things become immediately apparent.
 You also get to test your understanding of the source by watching
 all the mistakes other reviewers find that you missed in your
 reviewing. Refine and repeat.

 Getting a couple of very simple changes in right away also helps you
 pick up the workflow and toolset, but reviewing others changes is a
 huge boon to both the project and the would-be contributors doing
 the reviewing... much more so than correcting a handful of
 typographical errors.
 --
 Jeremy Stanley



 That is a very good idea Jeremy.

 I started learning and contributing to OpenStack yesterday. I have been
 writing down all the things I do, read and discover. Planning to blog about
 and share it. I think it would be valuable to show how to contribute and
 learn the project from the point of view of a novice to it.

 Cheers,
 Luis


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] bp proposal: filter based on the load averages of the host

2014-02-14 Thread sahid
Greetings,

I would like to add a new filter based on the load averages.

This filter will use the command uptime and will provide an option to choose a
period between 1, 5, and 15 minutes and an option to choose the max load
average (a float between 0 and 1).

Why:
  During scheduling it could be useful to exclude a host that has too heavy a
load, and the command uptime (available on all Linux systems)
can return the load average of the system over different periods.

About the implementation:
  Currently 'all' drivers (libvirt, xenapi, vmware) support a method
get_host_uptime that returns the output of the command 'uptime'. We have to add
in compute/stats.py a new method calculate_loadavg() that, based on the
output of driver.get_host_uptime() from compute/resource_tracker.py, returns a
well-formatted tuple of load averages for each period. We also need to update
api/openstack/compute/contrib/hypervisors.py to take care of this new
field.
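
To make the parsing step concrete, here is a rough sketch (not part of the
blueprint, just an illustration with made-up names) of how such a
calculate_loadavg() could turn a standard Linux 'uptime' line into a
(1min, 5min, 15min) tuple of floats; hypervisors that do not return a
standard uptime string would need their own handling:

import re

def calculate_loadavg(uptime_output):
    """Return the load averages from 'uptime' output as a tuple of floats."""
    match = re.search(r'load average[s]?:\s*([\d.]+),?\s+([\d.]+),?\s+([\d.]+)',
                      uptime_output)
    if match is None:
        raise ValueError('unrecognized uptime output: %r' % uptime_output)
    return tuple(float(value) for value in match.groups())

# Example:
#   calculate_loadavg(' 16:32:01 up 10 days, 3:05, 2 users, '
#                     'load average: 0.45, 0.32, 0.30')
#   returns (0.45, 0.32, 0.3)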

  The implementation will be divided in several parts:
* Add to host_manager the possibility to get the loads_averages
* Implement the filter based on this new property
* Implement the filter with a per-aggregate configuration

The blueprint: https://blueprints.launchpad.net/nova/+spec/filter-based-uptime

I will be happy to get any comments about this filter; perhaps it is not
implemented yet because of something I didn't see, or my thinking about the
implementation is wrong.

PS: I have checked metrics and cpu_resource but they do not get an average of
the system load, or perhaps I have not understood it all.

Thanks a lot,
s.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: filter based on the load averages of the host

2014-02-14 Thread Eric Brown
This will probably not work for the vmware driver since it does not return a 
standard uptime string.  Here's an example of what you would get:

$ nova hypervisor-uptime 1
+---------------------+------------------------------------------+
| Property            | Value                                    |
+---------------------+------------------------------------------+
| hypervisor_hostname | domain-c7(OpenStackCluster)              |
| id                  | 1                                        |
| uptime              | Please refer to 10.0.7.1 for the uptime  |
+---------------------+------------------------------------------+


On Feb 14, 2014, at 10:29 AM, sahid sahid.ferdja...@cloudwatt.com wrote:

 Greetings,
 
 I would like to add a new filter based on the load averages.
 
 This filter will use the command uptime and will provides an option to choice 
 a
 period between 1, 5, and 15 minutes and an option to choice the max load
 average (a float between 0 and 1).
 
 Why:
  During a scheduling it could be useful to exclude a host that have a too
 heavy load and the command uptime (available in all linux system) 
 can return a load average of the system in different periods.
 
 About the implementation:
  Currently 'all' drivers (libvirt, xenapi, vmware) supports a method
 get_host_uptime that returns the output of the command 'uptime'. We have to 
 add
 in compute/stats.py a new method calculate_loadavg() that returns based on the
 output of driver.get_host_uptime() from compute/ressource_tracker.py a well
 formatted tuple of load averages for each periods. We also need to update
 api/openstack/compute/contrib/hypervisors.py to take care of this new
 field.
 
  The implementation will be divided in several parts:
* Add to host_manager the possibility to get the loads_averages
* Implement the filter based on this new property
* Implement the filter with a per-aggregate configuration
 
 The blueprint: 
 https://blueprints.launchpad.net/nova/+spec/filter-based-uptime
 
 I will be happy to get any comments about this filter, perharps it is not 
 implemented
 yet because of something I didn't see or my thinking of the implementation is 
 wrong.
 
 PS: I have checked metrics and cpu_resource but It does not get an averages 
 of the
 system load or perhaps I have not understand all.
 
 Thanks a lot,
 s.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Jay Dobies

On Fri, Feb 14, 2014 at 10:27:20AM +1300, Robert Collins wrote:

So progressing with the 'and folk that want to use packages can' arc,
we're running into some friction.

I've copied -operators in on this because its very relevant IMO to operators :)

So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
places to the bare metal ephemeral device layout which requires
/mnt/).
  - possibly more in future.

Now, obviously its a 'small matter of code' to deal with this, but the
impact on ops folk isn't so small. There are basically two routes that
I can see:

# A
  - we have a reference layout - install from OpenStack git / pypi
releases; this is what we will gate on, and can document.
  - and then each distro (both flavor of Linux and also possibly things
like Fuel that distribution OpenStack) is different - install on X,
get some delta vs reference.
  - we need multiple manuals describing how to operate and diagnose
issues in such a deployment, which is a matrix that overlays platform
differences the user selects like 'Fedora' and 'Xen'.

I agree with what James already said here. It is probably not TripleO's job to
document all that. Good documentation for the reference layout should be the
goal.

And currently the differences aren't all that big I think. And for some of them
we already have good solutions (like e.g. the os-svc-* tools). There is room
for improvement in handling of the differences for usernames, though :)


# B
  - we have one layout, with one set of install paths, usernames
  - package installs vs source installs make no difference - we coerce
the package into reference upstream shape as part of installing it.

Unless I am completely misunderstanding your proposal, I think this would void
many of the reasons why people would choose to install from packages in the
first place.


  - documentation is then identical for all TripleO installs, except
the platform differences (as above - systemd on Fedora, upstart on
Ubuntu, Xen vs KVM)

B seems much more useful to our ops users - less subtly wrong docs, we
avoid bugs where tools we write upstream make bad assumptions,
experience operating a TripleO deployed OpenStack is more widely
applicable (applies to all such installs, not just those that happened
to use the same package source).

I am probably repeating much of what James already said. But I think an
operator that makes the decision to do a package-based TripleO installation does
so e.g. because he is familiar with the tools and conventions of the specific
distro/provider of the packages he chose, and probably wants TripleO to be
consistent with that. And yes, with the decision for packages, he decides to
differ from the reference layout.


I agree with the notion that admins have come to expect differences 
from distro to distro. It's the case for any number of services.


I'd go beyond that and say you're going to have problems getting the 
packages accepted/certified if they break the typical distro 
conventions. There are guidelines that say where things like Python code 
must live and packages may not even be accepted if they violate those.


The same is likely for the admins themselves, taking issue if the 
packages don't match their expectation criteria for the distro.



I see this much like the way Nova abstracts out trivial Hypervisor
differences to let you 'nova boot' anywhere, that we should be hiding
these incidental (vs fundamental capability) differences.

What say ye all?

-Robv




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-02-14 Thread Dina Belova
Thanks everyone who took part in our weekly meeting. It is a real pleasure to
work with you, folks!

Meeting minutes are here:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-14-15.04.html

Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-14-15.04.txt

Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-02-14-15.04.log.html

Thanks!


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Horizon] switchable switched classes in forms.py

2014-02-14 Thread Abishek Subramanian (absubram)
Hi,

I can see that the forms.py file can have parameters/inputs that are of class 
type 'switchable'/'switched', i.e. this field 'B' will appear on the form if it's 
'switched' based on a choice from a previous field 'A'. Then depending on the 
choice made for 'B', field 'C' will appear on the form.

I now want to find out whether the 'choices' for the form field 'B' can also be 
switched, i.e. depending on the choice made in field 'A', the choices that will 
appear for 'B' will change. Is this doable?
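
For reference, here is a minimal sketch of the existing switchable/switched
pattern described above (the field names and slug are made up, and the widget
attributes are the ones I believe Horizon's form JavaScript looks for). It only
shows how visibility and labels are switched today, not how the choice list of
'B' itself could be swapped:

from django import forms
from django.utils.translation import ugettext_lazy as _

class ExampleForm(forms.Form):
    # Field 'A': a switchable field; its value controls which switched
    # fields below are shown.
    field_a = forms.ChoiceField(
        label=_("Field A"),
        choices=[('one', _("One")), ('two', _("Two"))],
        widget=forms.Select(attrs={'class': 'switchable',
                                   'data-slug': 'fielda'}))

    # Field 'B': a switched field, only shown (with this label) when
    # field 'A' is set to 'one'.
    field_b = forms.ChoiceField(
        label=_("Field B"),
        required=False,
        choices=[('x', _("X")), ('y', _("Y"))],
        widget=forms.Select(attrs={'class': 'switched',
                                   'data-switch-on': 'fielda',
                                   'data-fielda-one': _("Field B")}))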


Thanks!
Abishek
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-14 Thread punal patel
I am interested. UTC - 8.


On Fri, Feb 14, 2014 at 1:48 AM, Nick Ma skywalker.n...@gmail.com wrote:

 I'm also interested in it. UTC8.

 --

 cheers,
 Li Ma


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Renat Akhmerov
“output” looks nice!


Renat Akhmerov
@ Mirantis Inc.

On 14 Feb 2014, at 20:26, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results' 
   select: '$.server_id'  
   store_as: v1
 
 'result' sounds better than 'response' and, I think, more fit to action 
 description.
 And I suggest for a new word - 'output'; actually, this block describe how 
 the output will be taken and stored.
 
 However, I agree that this block should be at action-property level:
 
 actions:
my-action
   result: 
  select: '$.server_id'  
  store_as: vm_id
   parameters:
  foo: bar
   
 
 
 On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results’ 
 
 Just a note: “response” indentation here is not correct, it’s not a parameter 
 called “response” but rather a property of “my-action”.
 
   select: '$.server_id'  
   store_as: v1
 
 In the code, we refer to action.result_helper
 
 1) Note that response is not exactly a parameter. It doesn't doesn't refer 
 to data. It's  (query, variable) pairs, that are used to parse the results 
 and post data to global context [1]. The terms response, or result, do not 
 reflect what is actually happening here. Suggestions? Save? Publish? Result 
 Handler? 
 
 For explicitness we can use something like “result-handler” and initially I 
 thought about this option. But I personally love conciseness and I think name 
 “result” would be ok for this section meaning it defines the structure of the 
 action result. “handler” is not 100% precise too because we don’t actually 
 handle a result here, we define the rules how to get this result.
 
 I would appreciate to see other suggestions though.
 
 2) Whichever name we use for this output transformer, shall it be under 
 parameters?
 
 No, what we have in this section is like a return value type for a regular 
 method. Parameters define action input.
 
 3) how do we call action/task parameters? Think 'model' (which reflects in 
 yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)  
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)
 
 Could you please clarify your questions here? Not sure I’m following...
 
 4) Syntax simplification: can we drop 'parameters' keyword? Anything under 
 action is action parameters, unless it's a reserved keyword, which the 
 implementation can parse out. 
 
 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'  
   store_as: v1
 
 It will create problems like name ambiguity in case we need to have a 
 parameter with the same names as keywords (“task-parameters” and “publish” in 
 your example). My preference would be to leave it explicit.
 
 Renat
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Best Regards,
 Nikolay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-14 Thread Jay Pipes
On Fri, 2014-02-14 at 08:54 +0900, Akihiro Motoki wrote:
 Hi,
 
 I wrote a blog post about how to setup Zuul manually.
 http://ritchey98.blogspot.jp/2014/02/openstack-third-party-testing-how-to.html
 
 It covers how to migrate from Gerrit Trigger plugin to Zuul and
 some tips including a way to define vendor-specific recheck trigger
 in addition to the setup procedure.
 
 Jay's puppet manifest is nice, but I hope the manual installation step
 is also helpful to set up 3rd party testing.

Thanks Akihiro-san,

I just put up the second article in the series that walks through
installation of a Jenkins master, Zuul, and Jenkins Job Builder and
testing communication with upstream:

http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

Feedback very much welcomed. Working on the slave setup article now...

Best,
-jay

 On Fri, Feb 14, 2014 at 5:39 AM, Jay Pipes jaypi...@gmail.com wrote:
  On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
  Jay,
 
  Just an FYI. We have modified the Gerrit plugin to accept/match a regex
  to generate notifications for recheck no bug/bug ###. It turned
  out to be a very simple fix and we (Arista Testing) are now triggering on
  recheck comments as well.
 
  Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
  where other folks can use it?
 
  I've got Zuul actually working pretty well in my os-ext-testing repo
  now. Only problem remaining is with the Jenkins slave trigger (not
  related to Gerrit...)
 
  Best,
  -jay
 
 
 
  ___
  OpenStack-Infra mailing list
  openstack-in...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week of January, Montreal, QC, Canada

2014-02-14 Thread Anita Kuno
Time for a summary:

Neutron Tempest Code Sprint Summary
Where: Montreal, QC, Canada
When: January 15, 16, 17 2014
Location: Salle du Parc at 3625 Parc avenue, a room in a residence of
McGill University.
Time: 9am - 5pm (8am - 6pm for some)

Purpose:
to focus on the stability of Neutron
to address gaps in Neutron testing
to help Neutron developers recognize other Neutron developers by
face and name

General:
The code sprint was very successful in stimulating conversation
between developers. Having the time to exchange ideas and having access
to others who were able to discuss issues, either long standing or new,
was valuable for supporting a sense of collaborative group work. The
venue was supportive in that it was quiet and requirements were
unobtrusive and available so that people could focus on their work. The
wifi gave us a hiccup since the port gerrit uses probably alerted a
system firewall, but workarounds were found and work continued.

Stats:
23 developers, 3 days

Group Size:
We had 23 people attend, but immediately split into multiple teams
focused on predetermined areas of stability and testing.  We only held
one large group session to solicit feedback on the structure of the
meetup.  The general feeling is that the maximum group size should not
exceed 30, to ensure efficiency for all who are there.

Location:
Canada: we had one person ask for a letter of invitation in time to
receive it and get a visa; he was granted a visa and attended - he did
tell me that he had requested a visa to travel to the US previously and
been denied - this person has recently been proposed as Neutron core
Quebec: I don't recall any conversations anyone had about having
difficulty with language issues
Montreal: it has an international airport, reasonable prices for
food and accommodation, the weather was not as bad as I had thought it
might have been, and no one took off to be a tourist since the days were all
rather grey

Outcomes:
Some specific code items were addressed [0]

Changes to support isolated testing were completed

Improvements to enable parallel testing

Significant improvement in Tempest API test cases

Work to enable grenade testing to validate upgrade

Some developers who had never met in person previously had an
opportunity to sit down with others and work together.
The build-up to the event helped to bring some focus to the importance
of testing in Neutron. It also brought some attention to how isolated
some Neutron developers are from the other OpenStack projects, with an
eye to addressing this isolation - even how isolated some developers are
from the rest of Neutron.

I felt that contributors to other projects were willing to spend more
time than they had been, supporting Neutron's efforts to address testing
gaps, due to the fact that there was a commitment to doing so that had a
very public focus.

One interesting outcome was the number of people who responded
favourably to the question on the evaluation [1] about approaching their
managers to request more time to work on Neutron upstream. I consider
that a favourable outcome. I have no way of knowing if anyone actually did
request more time of their manager, and if they did ask, what response
they received.

I think the code sprint represented a target of stability in a release
focused on stability. I think it was a talking point that had some
positive outcomes itself but also was the catalyst for other discussions
for Neutron in Icehouse as a whole.

[0] https://etherpad.openstack.org/p/montreal-code-sprint
[1] https://etherpad.openstack.org/p/montreal-code-sprint-evaluation

Next, an apology from me. Just after the code sprint, I was injured and didn't
realize it for about 2 weeks, so in the midst of being in pain and then
working on healing myself, I didn't get this summary out in the time
frame I had hoped to. I apologize to those folks who
needed this summary earlier. I hope this information is still useful now.

My thanks to all those who made this happen. To the Foundation, to all
the PTLs from other projects who supported this event and to all those
who attended and the managers who supported that attendance. And to Mark
McClain and Salvatore Orlando, thank you both so much.

Thank you,
Anita.


 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info] 
 Sent: Friday, December 27, 2013 18:00
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Neutron] Neutron Tempest code sprint - 2nd week 
 of January, Montreal, QC, Canada
 
 On 12/18/2013 04:17 PM, Anita Kuno wrote:
 Okay time for a recap.

 What: Neutron Tempest code sprint
 Where: Montreal, QC, Canada
 When: January 15, 16, 17 2014
 Location: I am about to sign the contract for Salle du Parc at 3625 
 Parc avenue, a room in a residence of McGill University.
 Time: 9am - 5am
 Time: 9am - 5pm

 I am expecting to see the following people in Montreal in January:
 Mark McClain
 

Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Dmitri Zimine
I like output, too. But it should go with 'input'.
In summary, there are two alternatives.
Note that I moved task-parameters under parameters. Ok with this?

actions:
   my-action
  input:
 foo: bar
 task-parameters: 
flavor_id:
image_id:
  output: 
  select: '$.server_id'  
  store_as: v1

this maps to action(input, *output)

actions:
   my-action
  parameters:
 foo: bar
 task-parameters: 
flavor_id:
image_id:
  result: 
  select: '$.server_id'  
  store_as: v1

this maps to result=action(parameters)


On Feb 14, 2014, at 8:40 AM, Renat Akhmerov rakhme...@mirantis.com wrote:

 “output” looks nice!
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 On 14 Feb 2014, at 20:26, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results' 
   select: '$.server_id'  
   store_as: v1
 
 'result' sounds better than 'response' and, I think, more fit to action 
 description.
 And I suggest for a new word - 'output'; actually, this block describe how 
 the output will be taken and stored.
 
 However, I agree that this block should be at action-property level:
 
 actions:
my-action
   result: 
  select: '$.server_id'  
  store_as: vm_id
   parameters:
  foo: bar
   
 
 
 On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results’ 
 
 Just a note: “response” indentation here is not correct, it’s not a 
 parameter called “response” but rather a property of “my-action”.
 
   select: '$.server_id'  
   store_as: v1
 
 In the code, we refer to action.result_helper
 
 1) Note that response is not exactly a parameter. It doesn't doesn't refer 
 to data. It's  (query, variable) pairs, that are used to parse the results 
 and post data to global context [1]. The terms response, or result, do not 
 reflect what is actually happening here. Suggestions? Save? Publish? Result 
 Handler? 
 
 For explicitness we can use something like “result-handler” and initially I 
 thought about this option. But I personally love conciseness and I think 
 name “result” would be ok for this section meaning it defines the structure 
 of the action result. “handler” is not 100% precise too because we don’t 
 actually handle a result here, we define the rules how to get this result.
 
 I would appreciate to see other suggestions though.
 
 2) Whichever name we use for this output transformer, shall it be under 
 parameters?
 
 No, what we have in this section is like a return value type for a regular 
 method. Parameters define action input.
 
 3) how do we call action/task parameters? Think 'model' (which reflects in 
 yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)  
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)
 
 Could you please clarify your questions here? Not sure I’m following...
 
 4) Syntax simplification: can we drop 'parameters' keyword? Anything under 
 action is action parameters, unless it's a reserved keyword, which the 
 implementation can parse out. 
 
 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'  
   store_as: v1
 
 It will create problems like name ambiguity in case we need to have a 
 parameter with the same names as keywords (“task-parameters” and “publish” 
 in your example). My preference would be to leave it explicit.
 
 Renat
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Best Regards,
 Nikolay
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-14 Thread Vishvananda Ishaya
Hi Vinod!

I think you can simplify the roles in the hierarchical model by only passing 
the roles for the authenticated project and above. All roles are then inherited 
down. This means it isn’t necessary to pass a scope along with each role. The 
scope is just passed once with the token and the project-admin role (for 
example) would be checking to see that the user has the project-admin role and 
that the project_id prefix matches.
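
As a toy illustration of the prefix idea (not from any existing code, just
assuming hierarchical project ids like 'orga.proja.proja2'):

def role_applies(token_project_id, target_project_id):
    """True if a role scoped to token_project_id also covers target_project_id.

    Roles are inherited down, so a role held on a parent project applies
    to all of its descendants, but never upwards.
    """
    return (target_project_id == token_project_id or
            target_project_id.startswith(token_project_id + '.'))

# role_applies('orga.proja', 'orga.proja.proja2') -> True  (inherited down)
# role_applies('orga.proja.proja2', 'orga.proja') -> False (not inherited up)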

There is only one case that this doesn’t handle, and that is when the user has 
one role (say member) in ProjA and project-admin in ProjA2. If the user is 
authenticated to ProjA, he can’t do project-adminy stuff for ProjA2 without 
reauthenticating. I think this is a reasonable sacrifice considering how much 
easier it would be to just pass the parent roles instead of going through all 
of the children.

Vish

On Feb 13, 2014, at 2:31 AM, Vinod Kumar Boppanna 
vinod.kumar.boppa...@cern.ch wrote:

 Dear All,
 
 At the meeting last week we (myself and Ulrich) have been assigned the task 
 of doing POC for Quota Management in the Hierarchical Multitenancy setup. 
 
 So, here it is:
 
 Wiki Page - https://wiki.openstack.org/wiki/POC_for_QuotaManagement   
 (explained here an example setup and my thoughts)
 
 Code - 
 https://github.com/vinodkumarboppanna/POC-For-Quotas/commit/391e9108fa579d292880c8836cadfd7253586f37
 
 Please post your comments or any inputs, and I hope this POC will be discussed 
 in this week's meeting on Friday at 1600 UTC.
 
 
 In addition to this, we have completed the implementation of Domain Quota 
 Management in Nova with V2 APIs, and if anybody is interested, please have a look
 
 BluePrint - 
 https://blueprints.launchpad.net/nova/+spec/domain-quota-driver-api
 Wiki Page - https://wiki.openstack.org/wiki/APIs_for_Domain_Quota_Driver
 GitHub Code - https://github.com/vinodkumarboppanna/DomainQuotaAPIs
 
 
 Thanks & Regards,
 Vinod Kumar Boppanna
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Devstack installation failed with CINDER installation

2014-02-14 Thread Asselin, Ramy
Jeremy,

Thanks for the clarification. 

I found an alternative workaround that works for me and restricts the 
openstack pypi mirror to only being used when running tox:

1. Add the PyPI mirror to tox.ini's [tox] entry:

+indexserver =
+default = http://pypi.openstack.org/openstack

e.g.

[tox]
minversion = 1.6
skipsdist = True
envlist = py26,py27,py33,pep8
indexserver =
default = http://pypi.openstack.org/openstack


Ramy

Reference:  
http://stackoverflow.com/questions/16471269/how-to-tell-tox-to-use-pypi-mirrors-for-installing-packages


-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Thursday, February 13, 2014 6:44 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Devstack installation failed with CINDER 
installation

On 2014-02-13 10:56:28 -0600 (-0600), Ben Nemec wrote:
[...]
 configure pip to use the pypi.openstack.org mirror.
[...]

While this is sometimes a useful hack for working around intermittent PyPI CDN 
growing pains on your personal development workstation, or maybe for ferreting 
out whether your local tests are getting different results because of varied 
package set between PyPI and our mirror, I fear that some people reading this 
might assume it's a stable public service and encode it into production 
configuration.

The pypi.openstack.org mirror is just a single VM, while pypi.python.org has 
CDN services fronting it for improved reachability, reliability and 
scalability. In fact, pypi.openstack.org resides on the same 
single-point-of-failure VM which also provides access to build logs and lots of 
other data.
It's intended mostly as a place for our automated build systems to look for 
packages so as not to hammer actual PyPI constantly and to provide us an 
additional layer of control over what we test with. It is *not* secure. Let me 
reiterate that point. It is for test jobs, so the content is served via plain 
unencrypted HTTP *only* and is therefore easily modified by a man-in-the-middle 
attack. It's also not guaranteed to be around indefinitely, or to necessarily 
be reachable outside the cloud provider networks where testing is performed, or 
to carry all the packages you may need, or to have enough bandwidth available 
to serve the entire user base, or to be up and on line 100% of the time, or...

...you get the idea.
--
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance v1 and v2

2014-02-14 Thread Pete Zaitcev
Hello:

does anyone happen to know of, or have a detailed write-up on, the
differences between so-called Glance v1 and Glance v2?

In particular do we still need Glance Registry in Havana, or
do we not? The best answer so far was to run the registry anyway,
just in case, which does not feel entirely satisfactory.
Surely someone should know exactly what is going on in the API
and have a good idea what the implications are for the users
of Glance (API, CLI, and Nova (I include Horizon into API)).

Thanks,
-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-02-13 13:27:20 -0800:
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.
 
 I've copied -operators in on this because its very relevant IMO to operators 
 :)
 
 So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
 places to the bare metal ephemeral device layout which requires
 /mnt/).
  - possibly more in future.
 
 Now, obviously its a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:
 
 # A
  - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
  - and then each distro (both flavor of Linux and also possibly things
 like Fuel that distribution OpenStack) is different - install on X,
 get some delta vs reference.
  - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.
 

Having read the responses, I'm inclined to support A, with some
additional requirements:

* Reference implementations are always derided as not realistic. I think
  we need to think of a different term. I prefer to just say that this
  is the upstream implementation. We very much expect that a cloud can
  and should operate with this model unmodified. Packagers are doing so
  to fit OpenStack into a greater support model, not because nobody
  would ever want to run upstream. I think of how a large portion of
  MySQL users tend to just run upstream's binaries. They don't call this
  the reference binaries.

* Documentation can be split into an architecture guide which should be
  a single source of truth and document interfaces only, and an operations
  guide, which will focus on the upstream operations story. Distros should
  provide sub-sections for that story to document their differences.
  They should not, however, be putting distro specific interfaces in the
  architecture documentation, and we would reject such things until they
  are known to work upstream.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] VPC Proposal

2014-02-14 Thread Martin, JC

There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
AWS VPC api. I don't think that this blueprint is providing the necessary 
constructs to really implement a VPC, and it is not taking into account the 
domains, or proposed multi tenant hierarchy. In addition, I could not find a 
discussion about this topic leading to the approval.

For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
discussion on how to really implement VPC, and eventually split it into 
multiple real blueprints for each area.

Please, provide feedback on the following document, and on the best way to move 
this forward.

https://wiki.openstack.org/wiki/Blueprint-VPC

Thanks,

JC.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All] Fixed recent gate issues

2014-02-14 Thread John Dickinson
As many of you surely noticed, we had some significant
gate issues in the last day. It's fixed now, and I've got the details below.

The root cause of the issue was a lack of proper testing in python-
swiftclient. We've made some improvements here in the last few hours,
but improving this will be a longer-term effort (and one that is being
prioritized).

Here's what happened: In order to get support for TLS certificate
validation, swiftclient was ported to use the Python requests library.
This is a good change, overall, but it was merged with a bug where the
object data was uploaded as a multipart/form-data instead of as the
raw data itself. This issue was resolved with patch
https://review.openstack.org/#/c/73585/. The gate is currently stable,
everyone should be unblocked by this issue now. If you have a patch
that failed a check or gate run, you should recheck/reverify with bug
#1280072.
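
As an illustration of the difference (this is not the actual swiftclient code,
just the general requests-library behaviour): passing a file via files= wraps
it in a multipart/form-data envelope, while data= sends the bytes as the raw
request body, which is what an object PUT needs.

import requests

# Hypothetical values for illustration only.
url = 'https://swift.example.com/v1/AUTH_demo/container/object'
headers = {'X-Auth-Token': 'token'}

with open('payload.bin', 'rb') as f:
    # Buggy pattern: the object body becomes a multipart/form-data envelope.
    requests.put(url, headers=headers, files={'file': f})

with open('payload.bin', 'rb') as f:
    # Correct pattern: the file contents are sent as the raw request body.
    requests.put(url, headers=headers, data=f)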

This, of course, raises the question of how this was allowed to
happen. First, there is a lack of functional testing in swiftclient
itself. (And, mea culpa, I should have done better testing before I
merged the initial breaking patch.) These tests are being prioritized
and worked on now.

Second, python-swiftclient did not have a symmetric gate with the
other projects that depend upon it. Although the gate change to make
this happen was proposed quite a while ago, it wasn't merged until
just this morning (https://review.openstack.org/#/c/70378/). Having
these tests earlier should have caught the issues in the original
python-swiftclient patch. Now that it has landed, there is much less
risk of such a problem happening again.

I want to thank Jeremy Stanley on the infra team for helping getting
these patches landed quickly. I'd also like to thank Tristan Cacqueray
for helping get the fixes written for python-swiftclient.

--John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] python-swiftclient releases

2014-02-14 Thread John Dickinson
I'm pleased to announce a couple of big releases for python-swiftclient:
versions 1.9.0 and 2.0.2. You can find them both on PyPI:

https://pypi.python.org/pypi/python-swiftclient/2.0.2
https://pypi.python.org/pypi/python-swiftclient/1.9.0

So why the two releases? The 2.0.2 release is the result of migrating to
the Python requests library. The 1.9.0 release is the final release of
the 1.X series and includes all unreleased changes before the port to
requests. Below is a summary of the changes included in 1.9.0 and 2.0.2.

1.9.0 new features:

* Add parameter --object-name, which:
1) Sets target object name when upload single file
2) Sets object prefix when upload a directory

* Add capabilities command
This option uses the new /info endpoint to request the
remote capabilities and nicely display it.

* Allow custom headers when using swift download (CLI)
A repeatable option, --header or -H, is added so a user can specify
custom headers such as Range or If-Modified-Since when downloading
an object with the swift CLI.

2.0.2 new features and important info:

* Ported to use the requests library to support TLS/SSL certificate
  validation. The certificate validation changes the interpretation
  and usage of the --insecure option.

Usage of the requests library has two important caveats:

1) SSL compression is no longer settable with the
--no-ssl-compression option. The option is preserved as a
no-op for client compatibility. SSL compression is set by the
system SSL library.

2) The requests library does not currently support Expect
100-continue requests on upload. Users requiring this feature
should use python-swiftclient 1.9.0 until requests support this
feature or use a different API wrapper.

Please pay special attention to these changes. There are no plans to
maintain ongoing development on the 1.X series. All future work,
including support for Python 3, will happen in the 2.X series.

I'd also like to explicitly thank the eNovance development team,
especially Tristan Cacqueray, Christian Schwede, and Chmouel Boudjnah,
for their work in these releases. In addition to several smaller
features, they led the effort to port python-swiftclient to the
requests library.

Note: 2.0.2 and not simply 2.0 is because of a bug that was
discovered after 2.0 was tagged. See
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027172.html
for details.

--John


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Fixed recent gate issues

2014-02-14 Thread Anita Kuno
On 02/14/2014 02:15 PM, John Dickinson wrote:
 As many of you surely noticed, we had some significant
 gate issues in the last day. It's fixed now, and I've got the
 details below.
 
 The root cause of the issue was a lack of proper testing in
 python- swiftclient. We've made some improvements here in the last
 few hours, but improving this will be a longer-term effort (and one
 that is being prioritized).
 
 Here's what happened: In order to get support for TLS certificate 
 validation, swiftclient was ported to use the Python requests
 library. This is a good change, overall, but it was merged with a
 bug where the object data was uploaded as a multipart/form-data
 instead of as the raw data itself. This issue was resolved with
 patch https://review.openstack.org/#/c/73585/. The gate is
 currently stable, everyone should be unblocked by this issue now.
 If you have a patch that failed a check or gate run, you should
 recheck/reverify with bug #1280072.
 
 This, of course, raises the question of how this was allowed to 
 happen. First, there is a lack of functional testing in
 swiftclient itself. (And, mea culpa, I should have done better
 testing before I merged the initial breaking patch.) These tests
 are being prioritized and worked on now.
 
 Second, python-swiftclient did not have a symmetric gate with the 
 other projects that depend upon it. Although the gate change to
 make this happen was proposed quite a while ago, it wasn't merged
 until just this morning (https://review.openstack.org/#/c/70378/).
 Having these tests earlier should have caught the issues in the
 original python-swiftclient patch. Now that it has landed, there is
 much less risk of such a problem happening again.
 
 I want to thank Jeremy Stanley on the infra team for helping
 getting these patches landed quickly. I'd also like to thank
 Tristan Cacqueray for helping get the fixes written for
 python-swiftclient.
 
 --John
 
 
 
 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
Thank you for the comprehensive write up, John.

Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Tiwari, Arvind
Hi JC,

I have proposed a BP to address VPC using domain hierarchy and hierarchical 
administrative boundary.

https://blueprints.launchpad.net/keystone/+spec/hierarchical-administrative-boundary


Thanks,
Arvind
-Original Message-
From: Martin, JC [mailto:jch.mar...@gmail.com] 
Sent: Friday, February 14, 2014 12:09 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] VPC Proposal


There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
AWS VPC api. I don't think that this blueprint is providing the necessary 
constructs to really implement a VPC, and it is not taking into account the 
domains, or proposed multi tenant hierarchy. In addition, I could not find a 
discussion about this topic leading to the approval.

For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
discussion on how to really implement VPC, and eventually split it into 
multiple real blueprints for each area.

Please, provide feedback on the following document, and on the best way to move 
this forward.

https://wiki.openstack.org/wiki/Blueprint-VPC

Thanks,

JC.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bad default values in conf files

2014-02-14 Thread Greg C
On Thu, Feb 13, 2014 at 6:38 AM, David Kranz dkr...@redhat.com wrote:


 Defaults in conf files seem to be one of the following:

 - Generic, appropriate for most situations
 - Appropriate for devstack
 - Appropriate for small, distro-based deployment
 - Approprate for large deployment


 In my experience creating OpenStack production systems, it appears the
answer is mostly "Appropriate for devstack".  I haven't used devstack
myself, only created production systems from OpenStack releases.  For
practically every OpenStack component the message queue config is missing,
and every api-paste.ini needs a [filter:authtoken] section for keystone.

It appears to me that those things are somehow covered when using
devstack.  Besides having to be added, the pitfall this creates is that
documentation for releases will not point out that they need to be added
and configured, because somehow devstack doesn't require it.

I use a VLAN model of networking as well, which as far as I can tell
devstack doesn't test/support, so I have to chase down a bunch of other
config items that are missing and scarcely documented.  The whole thing is
quite a chore.  I don't know why those common keystone and message queue
configs can't be in there from the start.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Joe Gordon
On Fri, Feb 14, 2014 at 12:09 PM, Martin, JC jch.mar...@gmail.com wrote:

 There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
 AWS VPC api. I don't think that this blueprint is providing the necessary 
 constructs to really implement a VPC, and it is not taking into account the 
 domains, or proposed multi tenant hierarchy. In addition, I could not find a 
 discussion about this topic leading to the approval.

Nova doesn't support keystone V3 domains or the proposed multi tenant
hierarchy (proposed after this BP) yet. Do you think this BP should be
blocked with those two as dependencies?


 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.

 Please, provide feedback on the following document, and on the best way to 
 move this forward.

 https://wiki.openstack.org/wiki/Blueprint-VPC

 Thanks,

 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Martin, JC
Arvind,

Thanks for pointing me to the blueprint. I'll add it to the related blueprints.

I think this could be part of the solution, but in addition to defining 
administrative boundaries, we need to change the way object sharing works. 
Today, there are only two levels: project private or public. You can share 
objects between projects, but there is no single model across OpenStack to 
define resource scope; each component has a slightly different model. The VPC 
implementation will also have to address that.

JC

On Feb 14, 2014, at 11:26 AM, Tiwari, Arvind arvind.tiw...@hp.com wrote:

 Hi JC,
 
 I have proposed BP to address VPC using domain hierarchy and hierarchical 
 administrative boundary.
 
 https://blueprints.launchpad.net/keystone/+spec/hierarchical-administrative-boundary
 
 
 Thanks,
 Arvind
 -Original Message-
 From: Martin, JC [mailto:jch.mar...@gmail.com] 
 Sent: Friday, February 14, 2014 12:09 PM
 To: OpenStack Development Mailing List
 Subject: [openstack-dev] VPC Proposal
 
 
 There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
 AWS VPC api. I don't think that this blueprint is providing the necessary 
 constructs to really implement a VPC, and it is not taking into account the 
 domains, or proposed multi tenant hierarchy. In addition, I could not find a 
 discussion about this topic leading to the approval.
 
 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.
 
 Please, provide feedback on the following document, and on the best way to 
 move this forward.
 
 https://wiki.openstack.org/wiki/Blueprint-VPC
 
 Thanks,
 
 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Dan Prince


- Original Message -
 From: Robert Collins robe...@robertcollins.net
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org, 
 openstack-operat...@lists.openstack.org
 Sent: Thursday, February 13, 2014 4:27:20 PM
 Subject: [openstack-dev] [TripleO] consistency vs packages in TripleO
 
 So progressing with the 'and folk that want to use packages can' arc,
 we're running into some friction.
 
 I've copied -operators in on this because its very relevant IMO to operators
 :)
 
 So far:
  - some packages use different usernames
  - some put things in different places (and all of them use different
 places to the bare metal ephemeral device layout which requires
 /mnt/).
  - possibly more in future.
 
 Now, obviously its a 'small matter of code' to deal with this, but the
 impact on ops folk isn't so small. There are basically two routes that
 I can see:
 
 # A
  - we have a reference layout - install from OpenStack git / pypi
 releases; this is what we will gate on, and can document.
  - and then each distro (both flavor of Linux and also possibly things
 like Fuel that distribution OpenStack) is different - install on X,
 get some delta vs reference.
  - we need multiple manuals describing how to operate and diagnose
 issues in such a deployment, which is a matrix that overlays platform
 differences the user selects like 'Fedora' and 'Xen'.
 
 # B
  - we have one layout, with one set of install paths, usernames
  - package installs vs source installs make no difference - we coerce
 the package into reference upstream shape as part of installing it.
  - documentation is then identical for all TripleO installs, except
 the platform differences (as above - systemd on Fedora, upstart on
 Ubuntu, Xen vs KVM)
 
 B seems much more useful to our ops users - less subtly wrong docs, we
 avoid bugs where tools we write upstream make bad assumptions,
 experience operating a TripleO deployed OpenStack is more widely
 applicable (applies to all such installs, not just those that happened
 to use the same package source).
 
 I see this much like the way Nova abstracts out trivial Hypervisor
 differences to let you 'nova boot' anywhere, that we should be hiding
 these incidental (vs fundamental capability) differences.
 
 What say ye all?

Let me restate the options the way I see it:

Option A is we do our job... by making it possible to install OpenStack on 
various distributions using a set of distro-agnostic tools (TripleO).

Option B is we make our job easy by strong-arming everyone into the same 
defaults of our upstream choosing.

Does option B look appealing? Perhaps at first glance. By taking away the 
differences it seems like we are making everyone's lives easier by 
streamlining our deployment codebase. There is one rub though: it isn't 
what users expect. Take /mnt/state, for example. This isn't the normal place for 
things to live. Why not use the read-only root mechanism some distributions 
already have and work with that instead? Or perhaps have /mnt/state as a backup 
solution which can be used if such a mechanism doesn't exist or is faulty?

In the end I think option A is the way we have to go. Is it more work... maybe. 
But in the end users will like us for it. And there is always the chance that, by 
not reimplementing tools and mechanisms which already exist in distros, this 
ends up being less work anyway. I do hope so...

As for the reference implementation part of A I do hope we choose it wisely and 
that as much as possible the distros take note of our choices.

Dan

 
 -Robv
 
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-14 Thread Russell Bryant
On 02/13/2014 08:53 AM, Sean Dague wrote:
 The delays on reviews for relatively trivial fixes I think is
 something that is probably more demotivating to new folks than the
 lack of badges. So some ability to keep on top of that I think
 would be really great.

Sure, I agree.  I still think badges just sound fun.  :-)

It's at least something that can be automated.  Mentoring and such is
important, of course, but takes a much bigger time investment from a
much bigger group of people.

I think we can view improving mentoring and on-boarding as a slightly
different but related issue than badges, which could be a fun way to
recognize and reward contributions.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Martin, JC
Joe,

I will let others comment, but since I think this BP was proposed well before 
the multi-tenant hierarchy BP, I would like to at least have the discussion. I 
would suggest a pause until we agree that it is OK to move this forward 
independently of the multi-tenant hierarchy proposal.

JC

On Feb 14, 2014, at 11:42 AM, Joe Gordon joe.gord...@gmail.com wrote:

 On Fri, Feb 14, 2014 at 12:09 PM, Martin, JC jch.mar...@gmail.com wrote:
 
 There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
 AWS VPC api. I don't think that this blueprint is providing the necessary 
 constructs to really implement a VPC, and it is not taking into account the 
 domains, or proposed multi tenant hierarchy. In addition, I could not find a 
 discussion about this topic leading to the approval.
 
 Nova doesn't support keystone V3 domains or the proposed multi tenant
 hierarchy (proposed after this BP) yet. Do you think this BP should be
 blocked with those two as dependencies?
 
 
 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.
 
 Please, provide feedback on the following document, and on the best way to 
 move this forward.
 
 https://wiki.openstack.org/wiki/Blueprint-VPC
 
 Thanks,
 
 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] promoting devtest_seed and devtest_undercloud to voting, + experimental queue for nova/neutron etc.

2014-02-14 Thread Robert Collins
Thanks to a massive push this week, both the seed *and* undercloud
jobs are now passing on tripleo-gate nodes, but they are not yet
voting.

I'd kind of like to get them voting on tripleo jobs (check only). We
don't have 2 clouds yet, so if the tripleo ci-cloud suffers a failure,
we'd have -1's everywhere. I think this would be an OK tradeoff (it's
check after all), but I'd like the -infra admin folks' opinion on this -
would it cause operational headaches for you, over and above the
current risks w/ the tripleo-ci cloud?

OTOH - we actually got passing ops with a fully deployed virtual cloud
- which is awesome.

Now we need to push through to having the overcloud deploy tests pass,
then the other scenarios we depend on - upgrades w/rebuild, and we'll
be in good shape to start optimising (pre-heated clouds, local distro
mirrors etc) and broadening (other distros ...).

Lastly, I'm going to propose a merge to infra/config to put our
undercloud story (which exercises the seed's ability to deploy via
heat with bare metal) as a check experimental job on our dependencies
(keystone, glance, nova, neutron) - if that's OK with those projects?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] promoting devtest_seed and devtest_undercloud to voting, + experimental queue for nova/neutron etc.

2014-02-14 Thread Sean Dague
On 02/14/2014 03:43 PM, Robert Collins wrote:
 Thanks to a massive push this week, both the seed *and* undercloud
 jobs are now passing on tripleo-gate nodes, but they are not yet
 voting.
 
 I'd kind of like to get them voting on tripleo jobs (check only). We
 don't have 2 clouds yet, so if the tripleo ci-cloud suffers a failure,
 we'd have -1's everywhere. I think this would be an ok tradeoff (its
 check after all), but I'd like -infra admin folks opinion on this -
 would it cause operational headaches for you, over and above the
 current risks w/ the tripleo-ci cloud?
 
 OTOH - we actually got passing ops with a fully deployed virtual cloud
 - which is awesome.
 
 Now we need to push through to having the overcloud deploy tests pass,
 then the other scenarios we depend on - upgrades w/rebuild, and we'll
 be in good shape to start optimising (pre-heated clouds, local distro
 mirrors etc) and broadening (other distros ...).
 
 Lastly, I'm going to propose a merge to infra/config to put our
 undercloud story (which exercises the seed's ability to deploy via
 heat with bare metal) as a check experimental job on our dependencies
 (keystone, glance, nova, neutron) - if thats ok with those projects?
 
 -Rob
 

My biggest concern with adding this to check experimental, is the
experimental results aren't published back until all the experimental
jobs are done.

We've seen really substantial delays, plus a 5 day complete outage a
week ago, on the tripleo cloud. I'd like to see that much more proven
before it starts to impact core projects, even in experimental.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Nikolay Makhotkin
Dmitri, by the word 'input' we mean a block that describes how the input data
for the corresponding task will be taken from the initial context. So it will
be a kind of expression (e.g. YAQL).

Renat, am I right?


On Fri, Feb 14, 2014 at 9:51 PM, Dmitri Zimine d...@stackstorm.com wrote:

 I like output, too. But it should go with 'input'
 In summary, there are two alternatives.
 Note that I moved task-parameters under parameters. Ok with this?

 actions:
my-action
   input:
  foo: bar
  task-parameters:
 flavor_id:
 image_id:
   output:
   select: '$.server_id'
   store_as: v1

 this maps to action(input, *output)

 actions:
my-action
   parameters:
  foo: bar
  task-parameters:
 flavor_id:
 image_id:
   result:
   select: '$.server_id'
   store_as: v1

 this maps to result=action(parameters)


 On Feb 14, 2014, at 8:40 AM, Renat Akhmerov rakhme...@mirantis.com
 wrote:

 output looks nice!


 Renat Akhmerov
 @ Mirantis Inc.

 On 14 Feb 2014, at 20:26, Nikolay Makhotkin nmakhot...@mirantis.com
 wrote:

 Current DSL snippet:
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results'
   select: '$.server_id'
   store_as: v1

 'result' sounds better than 'response' and, I think, more fit to action
 description.
 And I suggest for a new word - 'output'; actually, this block describe how
 the output will be taken and stored.

 However, I agree that this block should be at action-property level:

 actions:
my-action
   result:
  select: '$.server_id'
  store_as: vm_id
   parameters:
  foo: bar



 On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov 
 rakhme...@mirantis.comwrote:


 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:

 Current DSL snippet:
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results'


 Just a note: response indentation here is not correct, it's not a
 parameter called response but rather a property of my-action.

   select: '$.server_id'
   store_as: v1

 In the code, we refer to action.result_helper

 1) Note that *response* is not exactly a parameter. It doesn't doesn't
 refer to data. It's  (query, variable) pairs, that are used to parse the
 results and post data to global context [1]. The terms response, or result,
 do not reflect what is actually happening here. Suggestions? Save? Publish?
 Result Handler?


 For explicitness we can use something like result-handler and initially
 I thought about this option. But I personally love conciseness and I think
 name result would be ok for this section meaning it defines the structure
 of the action result. handler is not 100% precise too because we don't
 actually handle a result here, we define the rules how to get this result.

 I would appreciate to see other suggestions though.

 2) Whichever name we use for this output transformer, shall it be under
 parameters?


 No, what we have in this section is like a return value type for a
 regular method. Parameters define action input.

 3) how do we call action/task parameters? Think 'model' (which reflects
 in yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)


 Could you please clarify your questions here? Not sure I'm following...

 4) Syntax simplification: can we drop 'parameters' keyword? Anything
 under action is action parameters, unless it's a reserved keyword, which
 the implementation can parse out.

 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'
   store_as: v1


 It will create problems like name ambiguity in case we need to have a
 parameter with the same names as keywords (task-parameters and publish
 in your example). My preference would be to leave it explicit.

 Renat


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
 Best Regards,
 Nikolay
  ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list

Re: [openstack-dev] Devstack installation failed with CINDER installation

2014-02-14 Thread Ben Nemec

On 2014-02-13 20:44, Jeremy Stanley wrote:

On 2014-02-13 10:56:28 -0600 (-0600), Ben Nemec wrote:
[...]

configure pip to use the pypi.openstack.org mirror.

[...]

While this is sometimes a useful hack for working around
intermittent PyPI CDN growing pains on your personal development
workstation, or maybe for ferreting out whether your local tests are
getting different results because of varied package set between PyPI
and our mirror, I fear that some people reading this might assume
it's a stable public service and encode it into production
configuration.

The pypi.openstack.org mirror is just a single VM, while
pypi.python.org has CDN services fronting it for improved
reachability, reliability and scalability. In fact,
pypi.openstack.org resides on the same single-point-of-failure VM
which also provides access to build logs and lots of other data.
It's intended mostly as a place for our automated build systems to
look for packages so as not to hammer actual PyPI constantly and to
provide us an additional layer of control over what we test with. It
is *not* secure. Let me reiterate that point. It is for test jobs,
so the content is served via plain unencrypted HTTP *only* and is
therefore easily modified by a man-in-the-middle attack. It's also
not guaranteed to be around indefinitely, or to necessarily be
reachable outside the cloud provider networks where testing is
performed, or to carry all the packages you may need, or to have
enough bandwidth available to serve the entire user base, or to be
up and on line 100% of the time, or...

...you get the idea.


And yet it's still way, way more stable than official pypi, at least in 
my experience. :-)


But point taken.  I will make sure to include a disclaimer in the 
future.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bad default values in conf files

2014-02-14 Thread Jay Pipes
On Fri, 2014-02-14 at 11:30 -0800, Greg C wrote:
 
 On Thu, Feb 13, 2014 at 6:38 AM, David Kranz dkr...@redhat.com
 wrote:
 
 Defaults in conf files seem to be one of the following:
 
 - Generic, appropriate for most situations
 - Appropriate for devstack
 - Appropriate for small, distro-based deployment
 - Approprate for large deployment
 
 
  In my experiences in creating OpenStack production systems, it
 appears the answer is mostly Appropriate for devstack.  I haven't
 used devstack myself, only created production systems from Openstack
 releases.  For practically every openstack component the message queue
 config is missing

Actually, every OpenStack component has MQ configs in their conf files,
and well documented:

Nova MQ configs:

https://github.com/openstack/nova/blob/master/etc/nova/nova.conf.sample#L9-L187

Cinder:

https://github.com/openstack/cinder/blob/master/etc/cinder/cinder.conf.sample#L610-L815

Keystone:

https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L141-L163

Glance:

https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L246-L283

Neutron:

https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L105-L175

Ceilometer:

https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/ceilometer.conf.sample#L305-L475

 , and every api-paste.ini needs a [filter:authtoken] section for
 keystone.

Actually, every OpenStack project has this in their regular conf files:

Nova:

https://github.com/openstack/nova/blob/master/etc/nova/nova.conf.sample#L2624

Cinder:

https://github.com/openstack/cinder/blob/master/etc/cinder/cinder.conf.sample#L1879

Glance:

https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L551

Neutron:

https://github.com/openstack/neutron/blob/master/etc/neutron.conf#L332

Ceilometer:

https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/ceilometer.conf.sample#L713

 It appears to me that those things are somehow covered when using
 devstack.  Besides having to be added, the pitfall this creates is
 that documentation for releases will not point out that they need to
 be added and configured, because somehow devstack doesn't require it.

I'm not sure why you think this. What configuration management system
are you using to deploy OpenStack?

 I use a VLAN model of networking as well, which as far as I can tell
 devstack doesn't test/support, 

Incorrect. export NETWORK_MANAGER=VlanManager in your localrc.

 so I have to chase down a bunch of other config items that are missing
 and scarcely documented.  The whole thing is quite a chore.  I don't
 know why those common keystone and message queue configs can't be in
 there from the start.

They are.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gamification and on-boarding ...

2014-02-14 Thread Ben Nemec

On 2014-02-14 13:48, Russell Bryant wrote:

On 02/13/2014 08:53 AM, Sean Dague wrote:

The delays on reviews for relatively trivial fixes I think is
something that is probably more demotivating to new folks than the
lack of badges. So some ability to keep on top of that I think
would be really great.


Sure, I agree.  I still think badges just sound fun.  :-)

It's at least something that can be automated.  Mentoring and such is
important, of course, but takes a much bigger time investment from a
much bigger group of people.

I think we can view improving mentoring and on-boarding as a slightly
different but related issue than badges, which could be a fun way to
recognize and reward contributions.


As an added bonus, if I'm looking for someone who knows about doing 
$SOMETHING in OpenStack, I could potentially just go look up who has the 
$SOMETHING badge and ask them.


To me it mostly comes down to how much work it ends up being.  If it 
takes a huge effort to make this happen then there are probably better 
places to spend that effort.  But if we can make it happen without a 
huge investment of time then I think it could be worthwhile.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] call for help with nova bug management

2014-02-14 Thread Tracy Jones
Hi Folks - I’ve offered to help Russell out with managing nova’s bug queue.  
The charter of this is as follows

Triage the 125 new bugs
Ensure that the critical bugs are assigned properly and are making progress

Once this part is done we will shift our focus to things like
Bugs in incomplete state with no update by the reporter - they should be set to 
invalid if the requester does not update them in a timely manner.
Bugs which say they are in progress but no progress is being made.  If a bug 
is assigned and simply being ignored we should remove the assignment so others 
can grab it and work on it.

The bug triage policy is defined here https://wiki.openstack.org/wiki/BugTriage


What can you do?  First, I need a group of folks to volunteer to help with items 1 
and 2.  I will start a weekly IRC meeting where we work on the triage and check 
progress on critical (or even high) priority bugs.  If you can help out, please 
sign up at the end of this etherpad and include your timezone.  Once I have a 
few people to help I will schedule the meeting at a time that I hope is 
convenient for all.

https://etherpad.openstack.org/p/nova-bug-management

Thanks in advance for your help.

Tracy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-14 Thread Jay Pipes
On Thu, 2014-02-13 at 00:05 +, Adrian Otto wrote:
 If they want something more comprehensive, including a full set of open 
 source best practices by default, such as entrance and exit gating, hosted 
 code review and collaboration, It would be really nice to have a full 
 Zuul+Nodepool+Jenkins+Gerrit setup with some integration points where they 
 could potentially customize it.

You might be interested in a couple articles I recently wrote around
this very topic! :)

http://www.joinfu.com/2014/01/understanding-the-openstack-ci-system/
http://www.joinfu.com/2014/02/setting-up-an-external-openstack-testing-system/

Cheers,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-14 Thread Jay Pipes
On Thu, 2014-02-13 at 14:18 +0100, Julien Vey wrote:
 I agree gating is a great feature but it is not useful for every
 project and as Adrian said, not understood by everyone.
 I think many Solum users, and PaaS users in general, are
 single-project/single-build/simple git workflow and do not care about
 gating.

This is 100% correct.

 I see 2 drawbacks with Zuul :
 - Tenant Isolation : How do we allow access on zuul (and jenkins) for
 a specific tenant in isolation to the others tenants using Solum.
 - Build customization : One of the biggest advantage of Jenkins is its
 ecosystem and the many build customization it offers. Using zuul will
 prohibit this.

Not sure I understand this part... Zuul works *with* Jenkins. It does
not replace it.

 About Gerrit, I think it is also a little too much. Many users have
 their own reviewing system, Pull requests with github, bitbucket or
 stash, their own instance of gerrit, or even a custom git workflow.
 Gerrit would be a great feature for future versions of Solum. but only
 as an optionnal one, we should not force people into it.

Completely agreed. Frankly, both Gerrit and Jenkins are a giant pain in
the ass to install, maintain, and configure (hmm, which Java/Prolog
programs aren't, I wonder?).

Basing Solum's *default* workflow on these tools would be a mistake IMO.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: filter based on the load averages of the host

2014-02-14 Thread yunhong jiang
On Fri, 2014-02-14 at 15:29 +, sahid wrote:
 Greetings,
 
 I would like to add a new filter based on the load averages.
 
 This filter will use the command uptime and will provide an option to choose
 a period between 1, 5, and 15 minutes and an option to choose the max load
 average (a float between 0 and 1).
 
 Why:
   During scheduling it could be useful to exclude a host that has too heavy a
 load, and the command uptime (available on all Linux systems)
 can return the load average of the system over different periods.
 
 About the implementation:
   Currently 'all' drivers (libvirt, xenapi, vmware) support a method
 get_host_uptime that returns the output of the command 'uptime'. We have to add
 in compute/stats.py a new method calculate_loadavg() that returns, based on the
 output of driver.get_host_uptime() from compute/resource_tracker.py, a well
 formatted tuple of load averages for each period. We also need to update
 api/openstack/compute/contrib/hypervisors.py to take care of this new
 field.
 
   The implementation will be divided in several parts:
 * Add to host_manager the possibility to get the loads_averages
 * Implement the filter based on this new property
 * Implement the filter with a per-aggregate configuration
 
 The blueprint: https://blueprints.launchpad.net/nova/+spec/filter-based-uptime
 
 I will be happy to get any comments about this filter; perhaps it is not
 implemented yet because of something I didn't see, or my thinking about the
 implementation is wrong.
 
 PS: I have checked metrics and cpu_resource but it does not provide an average
 of the system load, or perhaps I have not understood it all.
 
 Thanks a lot,
 s.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think load average covers more than CPU; you need to consider things like I/O
usage, or even other metrics. Maybe you can have a look at
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling ?

Also, IMHO the policy of excluding a host that has too heavy a load is
not so clean; would it be better to use the load as a scheduler
weight?
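
Either way, as a rough illustration of the kind of filter being proposed: only
get_host_uptime() comes from the proposal above, while the parsing, the choice
of period and the 0..1 threshold below are purely illustrative, not the actual
implementation.

import re

# Illustrative sketch only -- not the proposed nova code. Parses the raw
# output of `uptime` (as returned by the driver's get_host_uptime()) and
# rejects hosts whose normalized load average exceeds a threshold.
LOAD_RE = re.compile(r'load averages?:\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)')

def parse_load_averages(uptime_output):
    """Return the (1min, 5min, 15min) load averages from `uptime` output."""
    match = LOAD_RE.search(uptime_output)
    if match is None:
        return None
    return tuple(float(value) for value in match.groups())

class LoadAverageFilter(object):
    """Reject hosts whose per-CPU load average exceeds max_load (0..1)."""

    def __init__(self, period_index=0, max_load=0.8):
        self.period_index = period_index   # 0 = 1min, 1 = 5min, 2 = 15min
        self.max_load = max_load

    def host_passes(self, uptime_output, num_cpus):
        loads = parse_load_averages(uptime_output)
        if loads is None:
            return True  # no data -- do not exclude the host
        normalized = loads[self.period_index] / max(num_cpus, 1)
        return normalized <= self.max_load

# Example: a 4-CPU host with a 1-minute load average of 2.40 passes (0.6 <= 0.8)
sample = "16:02:07 up 10 days,  3:05,  2 users,  load average: 2.40, 1.10, 0.95"
print(LoadAverageFilter().host_passes(sample, num_cpus=4))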

Thanks
--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] User Signup

2014-02-14 Thread Adam Young

On 02/01/2014 12:24 PM, Saju M wrote:

Hi folks,

Could you please spend 5 minutes on the blueprint 
https://blueprints.launchpad.net/horizon/+spec/user-registration and 
add your suggestions in the white board.



Thanks,


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Does it make sense for this to be in Keystone first, and then Horizon 
just consumes it?  I would think that user-registration-request would 
be a reasonable Keystone extension.  Then, you would add a role  
user-approver  for a specific domain to approve a user, which would 
trigger the create event.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] bad default values in conf files

2014-02-14 Thread Dirk Müller
 were not appropriate for real deployment, and our puppet modules were
 not providing better values
 https://bugzilla.redhat.com/show_bug.cgi?id=1064061.

I'd agree that raising the caching timeout is not a good production
default choice. I'd also argue that the underlying issue is fixed
with https://review.openstack.org/#/c/69884/

In our testing this patch has sped up revocation retrieval by a factor of 120.

 The default probably is too low, but raising it too high will cause
 concern with those who want revoked tokens to take effect immediately
 and are willing to scale the backend to get that result.

I agree, and changing defaults has a cost as well: every deployment
solution out there has to detect the value change, update their config
templates and potentially also migrate the setting from the old to the
new default for existing deployments. Having been in that situation, we have
been surprised by default changes that had
undesirable side effects, just because we chose to override a
different default elsewhere.

I'm totally on board with having production-ready defaults, but that
also means that they should seldom change, and change only for a very
good, possibly documented reason.


Greetings,
Dirk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [TripleO] promoting devtest_seed and devtest_undercloud to voting, + experimental queue for nova/neutron etc.

2014-02-14 Thread James E. Blair
Sean Dague s...@dague.net writes:

 On 02/14/2014 03:43 PM, Robert Collins wrote:
 Thanks to a massive push this week, both the seed *and* undercloud
 jobs are now passing on tripleo-gate nodes, but they are not yet
 voting.
 
 I'd kind of like to get them voting on tripleo jobs (check only). We
 don't have 2 clouds yet, so if the tripleo ci-cloud suffers a failure,
 we'd have -1's everywhere. I think this would be an ok tradeoff (its
 check after all), but I'd like -infra admin folks opinion on this -
 would it cause operational headaches for you, over and above the
 current risks w/ the tripleo-ci cloud?

You won't end up with -1's everywhere, you'll end up with jobs stuck in
the queue indefinitely, as we saw when the tripleo cloud failed
recently.  What's worse is that now that positive check results are
required for enqueuing into the gate, you will also not be able to merge
anything.

 OTOH - we actually got passing ops with a fully deployed virtual cloud
 - which is awesome.

Great! :)

 Now we need to push through to having the overcloud deploy tests pass,
 then the other scenarios we depend on - upgrades w/rebuild, and we'll
 be in good shape to start optimising (pre-heated clouds, local distro
 mirrors etc) and broadening (other distros ...).
 
 Lastly, I'm going to propose a merge to infra/config to put our
 undercloud story (which exercises the seed's ability to deploy via
 heat with bare metal) as a check experimental job on our dependencies
 (keystone, glance, nova, neutron) - if thats ok with those projects?

 -Rob

 My biggest concern with adding this to check experimental, is the
 experimental results aren't published back until all the experimental
 jobs are done.

 We've seen really substantial delays, plus a 5 day complete outage a
 week ago, on the tripleo cloud. I'd like to see that much more proven
 before it starts to impact core projects, even in experimental.

Until the tripleo cloud is multi-region, HA, and has a proven track
record of reliability, we can't have jobs that run on its nodes in any
pipeline for any non-tripleo project, for those reasons.  I do look
forward to when that is the case.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Notes on action YAML declaration and naming

2014-02-14 Thread Dmitri Zimine
Ok, I see.  

Do we have a spec that describes this?
Let's spell it out and describe the whole picture of input, output, parameters, 
and result. 
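
To seed that discussion, here is a rough sketch in plain Python of the
semantics as I understand them: input is built from the context via
expressions, the action runs with those parameters, and the result block
(select / store_as) extracts a value from the raw output and publishes it back
to the context. None of these names are part of the DSL and the expression
engine is only stubbed out; this is just to anchor the terminology.

def evaluate(expression, data):
    # Stand-in for a real expression engine (e.g. YAQL); only trivial
    # '$.key' lookups are supported here for the sake of the example.
    return data.get(expression.lstrip('$.'))

def run_action(action_fn, input_spec, result_spec, context):
    # Build the action input from the context ("input"/"parameters").
    action_input = {name: evaluate(expr, context)
                    for name, expr in input_spec.items()}
    raw_output = action_fn(**action_input)

    # Apply the result handler: select a value from the raw output and
    # publish it to the context ("result"/"output": select + store_as).
    value = evaluate(result_spec['select'], raw_output)
    context[result_spec['store_as']] = value
    return context

# Example: a fake "create server" action with
#   select: '$.server_id'  /  store_as: vm_id
context = {'image_ref': 'cirros'}
create_server = lambda image_id: {'server_id': 'abc123', 'status': 'BUILDING'}
run_action(create_server,
           input_spec={'image_id': '$.image_ref'},
           result_spec={'select': '$.server_id', 'store_as': 'vm_id'},
           context=context)
print(context['vm_id'])  # abc123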

DZ 


On Feb 14, 2014, at 1:01 PM, Nikolay Makhotkin nmakhot...@mirantis.com wrote:

 Dmitri, in our concerns under word 'input' we assume a block contains the 
 info about how the input data will be taken for corresponding task from 
 initial context. So, it will be a kind of expression (e.g. YAQL). 
 
 Renat, am I right?
 
 
 On Fri, Feb 14, 2014 at 9:51 PM, Dmitri Zimine d...@stackstorm.com wrote:
 I like output, too. But it should go with 'input'
 In summary, there are two alternatives. 
 Note that I moved task-parameters under parameters. Ok with this?
 
 actions:
my-action
   input:
  foo: bar
  task-parameters: 
 flavor_id:
 image_id:
   output: 
   select: '$.server_id'  
   store_as: v1
 
 this maps to action(input, *output)
 
 actions:
my-action
   parameters:
  foo: bar
  task-parameters: 
 flavor_id:
 image_id:
   result: 
   select: '$.server_id'  
   store_as: v1
 
 this maps to result=action(parameters)
 
 
 On Feb 14, 2014, at 8:40 AM, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 “output” looks nice!
 
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 On 14 Feb 2014, at 20:26, Nikolay Makhotkin nmakhot...@mirantis.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results' 
   select: '$.server_id'  
   store_as: v1
 
 'result' sounds better than 'response' and, I think, more fit to action 
 description.
 And I suggest for a new word - 'output'; actually, this block describe how 
 the output will be taken and stored.
 
 However, I agree that this block should be at action-property level:
 
 actions:
my-action
   result: 
  select: '$.server_id'  
  store_as: vm_id
   parameters:
  foo: bar
   
 
 
 On Fri, Feb 14, 2014 at 12:36 PM, Renat Akhmerov rakhme...@mirantis.com 
 wrote:
 
 On 14 Feb 2014, at 15:02, Dmitri Zimine d...@stackstorm.com wrote:
 
 Current DSL snippet: 
 actions:
my-action
   parameters:
   foo: bar
   response: # just agreed to change to 'results’ 
 
 Just a note: “response” indentation here is not correct, it’s not a 
 parameter called “response” but rather a property of “my-action”.
 
   select: '$.server_id'  
   store_as: v1
 
 In the code, we refer to action.result_helper
 
 1) Note that response is not exactly a parameter. It doesn't doesn't refer 
 to data. It's  (query, variable) pairs, that are used to parse the results 
 and post data to global context [1]. The terms response, or result, do not 
 reflect what is actually happening here. Suggestions? Save? Publish? 
 Result Handler? 
 
 For explicitness we can use something like “result-handler” and initially I 
 thought about this option. But I personally love conciseness and I think 
 name “result” would be ok for this section meaning it defines the structure 
 of the action result. “handler” is not 100% precise too because we don’t 
 actually handle a result here, we define the rules how to get this result.
 
 I would appreciate to see other suggestions though.
 
 2) Whichever name we use for this output transformer, shall it be under 
 parameters?
 
 No, what we have in this section is like a return value type for a regular 
 method. Parameters define action input.
 
 3) how do we call action/task parameters? Think 'model' (which reflects in 
 yaml,  code, docs, talk, etc.)
input and output? (+1)
in and out? (-1)  
request and response? (-1) good for WebServices but not generic enough
parameters and results? (ok)
 
 Could you please clarify your questions here? Not sure I’m following...
 
 4) Syntax simplification: can we drop 'parameters' keyword? Anything under 
 action is action parameters, unless it's a reserved keyword, which the 
 implementation can parse out. 
 
 actions:
my-action
   foo: bar
   task-parameters: # not a keyword, specific to REST_API
   flavor_id:
   image_id:
   publish:  # keyword
   select: '$.server_id'  
   store_as: v1
 
 It will create problems like name ambiguity in case we need to have a 
 parameter with the same names as keywords (“task-parameters” and “publish” 
 in your example). My preference would be to leave it explicit.
 
 Renat
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 -- 
 Best Regards,
 Nikolay
 ___
 OpenStack-dev mailing list
 

Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Rudra Rugge
Hi JC,

We agree with your proposed model of a VPC resource object. The proposal you are 
making makes sense to us and we would like to collaborate further on this. 
After reading your blueprint two things come to mind.

1. A VPC vision for OpenStack (your blueprint is proposing this vision).
2. Providing AWS VPC API compatibility within the current constraints of the 
OpenStack structure.

The blueprint that we proposed targets #2. 
It gives a way to implement an AWS-VPC-compatible API. This helps a subset of 
customers migrate their workloads from AWS to OpenStack-based clouds. In our 
implementation we tied a VPC to a project. That was the easiest way to keep 
isolation with the current structure. We agree that what you are proposing is 
more generic. One way is to implement our current proposal with a one-VPC-to-
one-project mapping. As your blueprint matures we will
move to a VPC-to-multiple-projects mapping.

We feel that instead of throwing away all the work done we can take an 
incremental approach.

Regards,
Rudra


On Feb 14, 2014, at 11:09 AM, Martin, JC jch.mar...@gmail.com wrote:

 
 There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
 AWS VPC api. I don't think that this blueprint is providing the necessary 
 constructs to really implement a VPC, and it is not taking into account the 
 domains, or proposed multi tenant hierarchy. In addition, I could not find a 
 discussion about this topic leading to the approval.
 
 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.
 
 Please, provide feedback on the following document, and on the best way to 
 move this forward.
 
 https://wiki.openstack.org/wiki/Blueprint-VPC
 
 Thanks,
 
 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-14 Thread Fox, Kevin M
I think a lot of projects don't bother to gate, because it's far too much work to 
set up a workable system.

I can think of several projects I've worked on that would benefit from it but 
haven't because of the time/cost of setting it up.

If I could just say solum create project foo and get it, I'm sure it would be 
much more used.

The same has been said of unit tests and CI in the past: We don't need it. 
When you give someone a simple-to-use system though, they see its value pretty 
quickly.

Yeah, Gerrit and Jenkins are a pain to set up. That's one of the things that 
might make Solum great: that it removes that pain.

Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, February 14, 2014 2:51 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

On Thu, 2014-02-13 at 14:18 +0100, Julien Vey wrote:
 I agree gating is a great feature but it is not useful for every
 project and as Adrian said, not understood by everyone.
 I think many Solum users, and PaaS users in general, are
 single-project/single-build/simple git workflow and do not care about
 gating.

This is 100% correct.

 I see 2 drawbacks with Zuul :
 - Tenant Isolation : How do we allow access on zuul (and jenkins) for
 a specific tenant in isolation to the others tenants using Solum.
 - Build customization : One of the biggest advantage of Jenkins is its
 ecosystem and the many build customization it offers. Using zuul will
 prohibit this.

Not sure I understand this part... Zuul works *with* Jenkins. It does
not replace it.

 About Gerrit, I think it is also a little too much. Many users have
 their own reviewing system, Pull requests with github, bitbucket or
 stash, their own instance of gerrit, or even a custom git workflow.
 Gerrit would be a great feature for future versions of Solum. but only
 as an optionnal one, we should not force people into it.

Completely agreed. Frankly, both Gerrit and Jenkins are a giant pain in
the ass to install, maintain, and configure (hmm, which Java/Prolog
programs aren't, I wonder?).

Basing Solum's *default* workflow on these tools would be a mistake IMO.

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-14 Thread Sukhdev Kapur
On Thu, Feb 13, 2014 at 12:39 PM, Jay Pipes jaypi...@gmail.com wrote:

 On Thu, 2014-02-13 at 12:34 -0800, Sukhdev Kapur wrote:
  Jay,
 
  Just an FYI. We have modified the Gerrit plugin to accept/match a regex
  and generate notifications for recheck no bug / recheck bug ### comments. It
  turned out to be a very simple fix and we (Arista Testing) are now triggering
  on recheck comments as well.

 Thanks for the update, Sukhdev! Is this updated Gerrit plugin somewhere
 where other folks can use it?



Yes, the patch is ready.  I am documenting it as a part of the overall
description of the Arista Testing setup and will release it soon as part of
the document that I am writing.
Hopefully next week.
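
In the meantime, for anyone wiring up a similar trigger, a comment-matching
pattern along these lines (purely illustrative, not the exact patch) catches
both the 'recheck no bug' and 'recheck bug ###' forms:

import re

# Illustrative only: matches "recheck", "recheck no bug" and "recheck bug NNN"
# comments so a third-party CI system can retrigger on any of them.
RECHECK_RE = re.compile(r'^\s*recheck(\s+(no\s+bug|bug\s+\d+))?\s*$',
                        re.IGNORECASE)

for comment in ("recheck no bug", "Recheck bug 1276214", "recheck"):
    assert RECHECK_RE.match(comment)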

regards..
-Sukhdev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Martin, JC
Rudra,

I do not agree that the current proposal provides the semantics of a VPC. If the 
goal is only to provide a facade through the EC2 API, it may address this, but 
unless you implement the basic features of a VPC, what good is it doing?

I do believe that the work can be done incrementally if we agree on the basic 
properties of a VPC, for example :
   - allowing projects to be created while using resources defined at the VPC 
level
   - preventing resources not explicitly defined at the VPC level to be used by 
a VPC.

I do not see in the current proposal how resources are scoped to a VPC, and 
how, for example, you prevent a shared network from being used within a VPC, or 
how you can define shared networks (or other shared resources) to be scoped 
only to a VPC.
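
To illustrate the kind of scoping I have in mind, here is a toy model (purely 
illustrative, not a proposed implementation): a VPC owns a set of projects and 
a set of explicitly attached resources, and only those resources are usable 
from within the VPC.

class VPC(object):
    def __init__(self, name):
        self.name = name
        self.projects = set()           # projects created under the VPC
        self.scoped_resources = set()   # resources explicitly attached to it

    def add_project(self, project_id):
        self.projects.add(project_id)

    def attach_resource(self, resource_id):
        self.scoped_resources.add(resource_id)

    def can_use(self, project_id, resource_id):
        # A resource is usable only if it was explicitly scoped to the VPC.
        return (project_id in self.projects and
                resource_id in self.scoped_resources)

vpc = VPC('vpc-1')
vpc.add_project('project-a')
vpc.attach_resource('net-private')
print(vpc.can_use('project-a', 'net-private'))  # True
print(vpc.can_use('project-a', 'net-shared'))   # False: not scoped to the VPC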

I think we already raised our concern to you several months ago, but it did not 
seem to have been addressed in the current proposal.

thanks,

JC

On Feb 14, 2014, at 3:50 PM, Rudra Rugge rru...@juniper.net wrote:

 Hi JC,
 
 We agree with your proposed model of a VPC resource object. Proposal you are 
 making makes sense to us and we would like to collaborate further on this. 
 After reading your blueprint two things come to mind.
 
 1. VPC vision for Openstack? (Your blueprint is proposing this vision)
 2. Providing AWS VPC api compatibility with current constrains of openstack 
 structure.
 
 The blueprint that we proposed targets #2. 
 It gives a way to implement AWS VPC api compatible API. This helps subset 
 of customers to migrate their workloads from AWS to openstack based clouds. 
 In our implementation we tied VPC to project. That was easiest way to keep 
 isolation with current structure. We agree that what you are proposing is 
 more generic. One to way is to implement our current proposal to have one VPC 
 to one project mapping. As your blueprint matures we will
 move VPC to multiple project mapping.
 
 We feel that instead of throwing away all the work done we can take an 
 incremental approach.
 
 Regards,
 Rudra
 
 
 On Feb 14, 2014, at 11:09 AM, Martin, JC jch.mar...@gmail.com wrote:
 
 
 There is a Blueprint targeted for Icehouse-3 that is aiming to implement the 
 AWS VPC api. I don't think that this blueprint is providing the necessary 
 constructs to really implement a VPC, and it is not taking into account the 
 domains, or proposed multi tenant hierarchy. In addition, I could not find a 
 discussion about this topic leading to the approval.
 
 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.
 
 Please, provide feedback on the following document, and on the best way to 
 move this forward.
 
 https://wiki.openstack.org/wiki/Blueprint-VPC
 
 Thanks,
 
 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Need a new DSL for Murano

2014-02-14 Thread Alexander Tivelkov
Hi folks,

Murano is maturing, and we are getting more and more feedback from our early
adopters. The overall reception is very positive, but at the same time
there are some complaints as well. By now the most significant complaint is
that it is hard to write workflows for application deployment and maintenance.

The current version of the workflow definition markup really has some design
drawbacks which limit its potential adoption. They are caused by the fact
that it was never intended for Application Catalog use cases.

I'll briefly touch these drawbacks first:


   1. Murano's workflow engine is actually a state machine, however the
   workflow markup does not explicitly define the states and transitions.
   2. There is no data isolation within any environment, which causes both
   potential security vulnerabilities and unpredictable workflow behaviours.
   3. There is no easy way to reuse the workflows and their related
   procedures between several applications.
   4. The markup uses JSONPath, which relies on Python's 'eval' function.
   This is insecure and has to be avoided.
   5. The workflow markup is XML-based, which is not a common practice
   in the OpenStack community.

So, it turns out that we have to design and implement a new workflow
definition notation, which will not have any of the issues mentioned above.

At the same time, it should still allow to fully specify the configuration
of any third-party Application, its dependencies with other Applications
and define specific actions which are required for Application deployment,
configuration and life cycle management.

This new notation should allow the following:

   - List all the required configuration parameters and dependencies for a
     given application
   - Validate user input and match it to the defined parameters
   - Define specific deployment actions and their execution order
   - Define behaviors to handle the events of changes in the application's
     environment


Also, it should satisfy the following requirements:

   - Minimize the amount of configuration for common application parts, i.e.
     reuse existing configuration parts and add only the differences specific
     to the application.
   - Allow different deployment tools to be used with the same markup
     constructs, i.e. provide a high-level abstraction on the underlying tools
     (heat, shell, chef, puppet, etc.)
   - For security reasons it should NOT allow arbitrary operations to be
     executed, i.e. it should allow only a predefined set of meaningful
     configuration actions to be run.



So, I would suggest introducing a simple, domain-specific notation
which would satisfy these needs:

   - Application dependencies and configuration properties are defined
     declaratively, in a way similar to how it is done in Heat templates.
   - Each property has special constraints and rules, allowing the input and
     the applications' relationships within the environment to be validated.
   - The workflows are defined in an imperative way: as a sequence of actions
     or method calls. This may include assigning data variables or calling the
     workflows of other applications.
   - All of these may be packaged in a YAML format. The example may look
     something like this [1].


The final version may become a bit more complicated, but as a starting
point this should look fine. I suggest we cover this in more detail at our
next IRC meeting on Tuesday.
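
To make the validation part a bit more concrete, here is a rough Python sketch
of the idea; the property names and constraint keys below are purely
illustrative and are not the proposed notation:

# Illustrative only: declared properties with constraints, validated against
# user input before any deployment workflow runs.
PROPERTIES = {
    'name':      {'type': str, 'required': True},
    'instances': {'type': int, 'required': False, 'default': 1,
                  'min': 1, 'max': 10},
}

def validate(user_input):
    values = {}
    for name, spec in PROPERTIES.items():
        if name not in user_input:
            if spec.get('required'):
                raise ValueError('missing required property: %s' % name)
            values[name] = spec.get('default')
            continue
        value = user_input[name]
        if not isinstance(value, spec['type']):
            raise TypeError('%s must be of type %s'
                            % (name, spec['type'].__name__))
        if 'min' in spec and value < spec['min']:
            raise ValueError('%s is below the allowed minimum' % name)
        if 'max' in spec and value > spec['max']:
            raise ValueError('%s is above the allowed maximum' % name)
        values[name] = value
    return values

print(validate({'name': 'my-app', 'instances': 3}))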

Any feedback or suggestions are appreciated.


[1] https://etherpad.openstack.org/p/murano-new-dsl-example

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Need a new DSL for Murano

2014-02-14 Thread Joshua Harlow
An honest question,

U are mentioning what appears to be the basis for a full programming language 
(variables, calling other workflows - similar to functions) but then u mention 
this is being stuffed into yaml.

Why?

It appears like u might as well spend the effort and define a grammar and 
simplistic language that is not stuffed inside yaml. Shoving one into yaml 
syntax seems like it gets u none of the benefits of syntax checking, parsing, 
validation (highlighting...) and all the pain of yaml.

Something doesn't seem right about the approach of creating languages inside 
the yaml format (in a way it becomes like xsl, yet xsl at least has a spec and 
is well defined).

My 2 cents

Sent from my really tiny device...

On Feb 14, 2014, at 7:22 PM, Alexander Tivelkov 
ativel...@mirantis.com wrote:


Hi folks,


Murano matures, and we are getting more and more feedback from our early 
adopters. The overall reception is very positive, but at the same time there 
are some complaints as well. By now the most significant complaint is that it is 
hard to write workflows for application deployment and maintenance.

The current version of the workflow definition markup really has some design drawbacks 
which limit its potential adoption. They are caused by the fact that it was 
never intended for use for Application Catalog use-cases.


I'll briefly touch these drawbacks first:

  1.  Murano's workflow engine is actually a state machine, however the 
workflow markup does not explicitly define the states and transitions.
  2.  There is no data isolation within any environment, which causes both 
potential security vulnerabilities and unpredictable workflow behaviours.
  3.  There is no easy way to reuse the workflows and their related procedures 
between several applications.
  4.  The markup uses JSONPath, which relies on Python’s 'eval' function. This 
is insecure and has to be avoided.
  5.  The workflow markup is XML-based, which is not a common practice in 
the OpenStack community.

So, it turns out that we have to design and implement a new workflow definition 
notation, which will not have any of the issues mentioned above.

At the same time, it should still allow to fully specify the configuration of 
any third-party Application, its dependencies with other Applications and 
define specific actions which are required for Application deployment, 
configuration and life cycle management.

This new notation should allow to do the following:


  *   List all the required configuration parameters and dependencies for a 
given application

  *   Validate user input and match it to the defined parameters

  *   Define specific deployment actions and their execution order

  *   Define behaviors to handle the events of changes in application’s 
environment


Also, it should satisfy the following requirements:


  *   Minimize the amount of configuration for common application parts, i.e. 
reuse existing configuration parts and add only difference specific to the 
application.

  *   Allow to use different deployment tools with using the same markup 
constructs. i.e. provide a high-level abstraction on the underlying tools 
(heat, shell, chef, puppet etc)

  *   For security reasons it should NOT allow to execute arbitrary operations 
- i.e. should allow to run only predefined set of meaningful configuration 
actions.



So, I would suggest to introduce a simple and domain specific notation which 
would satisfy these needs:

  *   Application dependencies and configuration properties are defined 
declaratively, in a way similar to how it is done in Heat templates.

  *   Each property has special constraints and rules, allowing to validate the 
input and applications relationship within the environment.

  *   The workflows are defined in imperative way: as a sequence of actions or 
method calls. This may include assigning data variables or calling the 
workflows of other applications.

  *   All of these may be packaged in a YAML format. The example may look 
something like this [1]


The final version may become a bit more complicated, but as the starting point 
this should look fine. I suggest to cover this in more details on our next IRC 
meeting on Tuesday.


Any feedback or suggestions are appreciated.



[1] https://etherpad.openstack.org/p/murano-new-dsl-example

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Harshad Nakil
Hi JC,

You have put it aptly. The goal of the blueprint is to present a facade for
the AWS VPC API, as the name suggests.
As per your definition of VPC, shared networks will have issues.
However, many of these concepts are not exposed to AWS customers and
the API works well.
While we work incrementally towards your definition of VPC, we can
maintain compatibility with the AWS API that we are proposing, as we are a
subset of your proposal and don't expose all features within the VPC.

Regards
-Harshad


 On Feb 14, 2014, at 6:22 PM, Martin, JC jch.mar...@gmail.com wrote:

 Rudra,

 I do not agree that the current proposal provides the semantic of a VPC. If 
 the goal is to only provide a facade through the EC2 API, it may address 
 this, but unless you implement the basic features of a VPC, what good is it 
 doing ?

 I do believe that the work can be done incrementally if we agree on the basic 
 properties of a VPC, for example :
   - allowing projects to be created while using resources defined at the VPC 
 level
   - preventing resources not explicitly defined at the VPC level to be used 
 by a VPC.

 I do not see in the current proposal how resources are scoped to a VPC, and 
 how, for example, you prevent shared network to be used within a VPC, or how 
 you can define shared networks (or other shared resources) to only be scoped 
 to a VPC.

 I think we already raised our concern to you several months ago, but it did 
 not seem to have been addressed in the current proposal.

 thanks,

 JC

 On Feb 14, 2014, at 3:50 PM, Rudra Rugge rru...@juniper.net wrote:

 Hi JC,

 We agree with your proposed model of a VPC resource object. Proposal you are 
 making makes sense to us and we would like to collaborate further on this. 
 After reading your blueprint two things come to mind.

 1. VPC vision for Openstack? (Your blueprint is proposing this vision)
 2. Providing AWS VPC api compatibility with current constrains of openstack 
 structure.

 The blueprint that we proposed targets #2.
 It gives a way to implement AWS VPC api compatible API. This helps subset 
 of customers to migrate their workloads from AWS to openstack based clouds. 
 In our implementation we tied VPC to project. That was easiest way to keep 
 isolation with current structure. We agree that what you are proposing is 
 more generic. One to way is to implement our current proposal to have one 
 VPC to one project mapping. As your blueprint matures we will
 move VPC to multiple project mapping.

 We feel that instead of throwing away all the work done we can take an 
 incremental approach.

 Regards,
 Rudra


 On Feb 14, 2014, at 11:09 AM, Martin, JC jch.mar...@gmail.com wrote:


 There is a Blueprint targeted for Icehouse-3 that is aiming to implement 
 the AWS VPC api. I don't think that this blueprint is providing the 
 necessary constructs to really implement a VPC, and it is not taking into 
 account the domains, or proposed multi tenant hierarchy. In addition, I 
 could not find a discussion about this topic leading to the approval.

 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.

 Please, provide feedback on the following document, and on the best way to 
 move this forward.

 https://wiki.openstack.org/wiki/Blueprint-VPC

 Thanks,

 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][EC2] attach and detach volume response status

2014-02-14 Thread Rui Chen
Hi Stackers;

I use the Nova EC2 interface to attach a volume; the attach succeeds, but the volume
status is detached in the response message.

# euca-attach-volume -i i-000d -d /dev/vdb vol-0001
ATTACHMENT  vol-0001i-000d  detached

This confuses me; I think the status should be attaching or in-use.

I find that the attach and detach volume interfaces return volume['attach_status'],
but the describe volume interface returns volume['status'].

Is it a bug, or is there some other consideration I am not aware of?

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Robert Collins
On 15 February 2014 07:46, Clint Byrum cl...@fewbar.com wrote:

 * Reference implementations are always derided as not realistic. I think
   we need to think of a different term. I prefer to just say that this
   is the upstream implementation. We very much expect that a cloud can
   and should operate with this model unmodified. Packagers are doing so
   to fit OpenStack into a greater support model, not because nobody
   would ever want to run upstream. I think of how a large portion of
   MySQL users tend to just run upstream's binaries. They don't call this
   the reference binaries.

Ok, upstream - ack.

 * Documentation can be split into an architecture guide which should be
   a single source of truth and document interfaces only, and an operations
   guide, which will focus on the upstream operations story. Distros should
   provide sub-sections for that story to document their differences.
   They should not, however, be putting distro specific interfaces in the
   architecture documentation, and we would reject such things until they
   are known to work upstream.

Ok.

I'll leave this a few more days to see if more data points arrive, but
it seems largely slanted in this direction.

That said, I wish there were some way to assess the costs/benefits in
terms of OpenStack adoption - OpenStack is in a way an operating system
itself - consider VMware: there's /one/ VMware, no matter which org
you buy it from, and whatever addons or integrations or support that org
does.

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] consistency vs packages in TripleO

2014-02-14 Thread Robert Collins
On 15 February 2014 08:42, Dan Prince dpri...@redhat.com wrote:


 Let me restate the options the way I see it:

 Option A is we do our job... by making it possible to install OpenStack on 
 various distributions using a set of distro-agnostic tools (TripleO).

So our job is to be the upstream installer. If upstream said 'we only
support RHEL and Ubuntu', our job would arguably end there. And in
fact, we're being much more inclusive than many other parts of
upstream - Suse isn't supported upstream, nor is Ubuntu non-LTS, nor
Fedora.

 Option B is we make our job easy by strong arming everyone into the same 
 defaults of our upstream choosing.

Does Nova strong arm everyone into using kvm? It's the default. Or
keystone into using the SQL token store - it's the default?

No - defaults are not strong arming. But the defaults are obviously
defaults, and inherited by downstreams. And some defaults are larger
than others - we've got well-defined interfaces in OpenStack, which
have the primary characteristic of 'learn once, apply everywhere' -
even though in principle you can replace them. At the low level that
means REST and the message-bus RPCs; a level up, Keystone - and more
recently Nova and Neutron - have become that, as we get higher-order
code like Heat and Savanna that depends on them. I hope no one would
replace Nova with Eucalyptus and then say they're running OpenStack -
in the same way, we're both defining defaults *and* building
interfaces. *That* is our job - making OpenStack *upstream* deployable,
in the places, and on the platforms, with the options, that our users want.

Further to that, upstream we're making choices with our *users* in
mind - both cloud consumers and cloud operators. They are why we ask
questions like 'is having every install potentially use different
usernames for the nova service a good idea?'. The only answer so far
has been 'because distros have chosen different usernames already and
we need to suck it up'. That's not a particularly satisfying answer.

 Does option B look appealing? Perhaps at first glance. By taking away the 
 differences it seems like we are making everyone's lives easier by 
   streamlining our deployment codebase. There is this one rub though: it isn't 
 what users expect.

I don't know what users expect: There's an assumption stated in some
of the responses that people who choose 'TripleO + Packages' do that
for a reason. I think this is likely going to be wrong much of the
time. Why? Because upstream doesn't offer someone to ring when there
is a problem. So people will grab RDO, or Suse's offering, or
Rackspace Private Cloud, or HP Cloud OS, or
$distribution-of-openstack-of-choice : and I don't expect for most
people that 'and we used a nova deb package' vs 'and we used a nova
pip package' is going to be *why* they choose that vendor, so as a
result many people will get TripleO+Packages because their vendor
chose that for them. That places a responsibility on the vendors and
on us. The vendors need to understand the consequences of their
packages varying trivially from upstream - the
every-unix-is-a-little-different death of a thousand cuts problem -
and we need to help vendors understand the drivers that lead them to
need to build images via packages.

 Take /mnt/state for example. This isn't the normal place for things to live. 
 Why not use the read-only root mechanism some distributions already have and 
 work with that instead? Or perhaps have /mnt/state as a backup solution which 
 can be used if a mechanism doesn't exist or is faulty?

Currently we have two options for upgrading images. A) /mnt/state, B)
a SAN + cinder. We haven't tested B), and I expect for many installs B
won't be an option. /mnt/state is 100% technical, as no other options
exist - none of the Linux distro 'read only root' answers today answer
the problem /mnt/state solves in a way compatible with Nova.

 In the end I think option A is the way we have to go. Is it more work... 
 maybe. But in the end users will like us for it. And there is always the case 
 that by not reimplementing some of the tools and mechanisms which already 
 exist in distros that this ends up being less work anyways. I do hope so...

Certainly we're trying very hard to keep things we reimplement minimal
and easily swap-outable (like o-r-c which I expect some deployments
will want to replace with chef/puppet).

 As for the reference implementation part of A I do hope we choose it wisely 
 and that as much as possible the distros take note of our choices.

+1

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Question about Zuul's role in Solum

2014-02-14 Thread Robert Collins
On 15 February 2014 14:34, Fox, Kevin M kevin@pnnl.gov wrote:
 I think a lot of projects don't bother to gate, because its far to much work 
 to set up a workable system.

 I can think of several projects I've worked on that would benefit from it but 
 haven't because of time/cost of setting it up.

 If I could just say solum create project foo and get it, I'm sure it would 
 be much more used.

 The same has been said of Unit tests and CI in the past. We don't need it. 
 When you give someone a simple to use system though, they see its value 
 pretty quickly.

 Yeah, gerrit and jenkins are a pain to set up. That's one of the things that 
 might make solum great: that it removes that pain.

Gating is hard, so we should do more of it.

+1 on gating by default, rather than being nothing more than a remote
git checkout - there are lots of those systems already, and being one
won't make solum stand out.

-Rob


-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request for testing new cloud foundation layer on bare metal

2014-02-14 Thread Robert Collins
I'm sorry if this sounds rude, but I've been seeing your emails come
in, and I've read your website, and I still have 0% clue about what
PetiteCloud is.

On 12 February 2014 21:56, Aryeh Friedman aryeh.fried...@gmail.com wrote:
 PetiteCloud is a 100% Free Open Source and Open Knowledge bare metal capable
 Cloud Foundation Layer for Unix-like operating systems. It has the following
 features:

What is a Cloud Foundation Layer? What's the relevance of OK here (I
presume you mean http://okfn.org/ ?).

 * Support for bhyve (FreeBSD only) and QEMU
 * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all others
 via QEMU only) and all supported software (including running OpenStack on
 VM's)
 * Install, import, start, stop and reboot instances safely (guest OS
 needs to be controlled independently)
 * Clone, backup/export, delete stopped instances 100% safely

So far it sounds like a hypervisor management layer - which is what Nova is.

 * Keep track of all your instances on one screen

I think you'll need a very big screen eventually :)

 * All transactions that change instance state are password protected at
 all critical stages
 * Advanced options:
 * Ability to use/make bootable bare metal disks for backing stores
 * Multiple NIC's and disks
 * User settable (vs. auto assigned) backing store locations

if backing store == virtual disk, this sounds fairly straight forward,
though 'bootable bare metal disks' is certainly an attention grabbing
statement for a hypervisor.

 * A growing number of general purpose and specialized
 instances/applications are available for PetiteCloud

 We would like to know if people a) find this useful and b) does it live up
  to its claims for a wide variety of OpenStack installs

I'm not clear what its claims are w.r.t. OpenStack. Is it a testing
/development tool like
https://git.openstack.org/cgit/openstack-dev/devstack ? Is it a
deployment tool like
https://git.openstack.org/cgit/openstack/tripleo-incubator? Is it a
profiling tool like https://git.openstack.org/cgit/stackforge/rally?

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron py27 test 'FAIL: process-returncode'

2014-02-14 Thread Robert Collins
You can pull the subunit log and use subunit-ls to see the tests that
ran, and do a testr list-tests to get the set of all the tests: the one
that bailed out must be one of the tests that doesn't report having
been run [or possibly the very last one that ran in a worker - but
let's assume not]. So subtract the ones that ran from the ones that
should have run; then run testr run --load-list $YOURLISTOFTESTS. One
or more tests will run and it will stop again. Repeat, removing the
tests that ran (using testr last --subunit | subunit1to2 | subunit-ls)
and re-running, until it's bailing out immediately (because testtools
has a deterministic test order [in the absence of optimisers or
randomisers], it will always run the remaining tests in the same
order). Once it's exiting immediately, the culprit is probably the
first test in the set that you ran, and you can see the order testtools
was going to use via 'testr list-tests'. If it doesn't exit immediately
at any point and you end up with an empty set, then it's probably a
two-test interaction - but those don't tend to exit hard IME, so let's
cross that bridge if you hit it.
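
Roughly, one round of that subtract-and-rerun loop could look like this
(a rough sketch rather than a polished tool; it assumes testr,
subunit1to2 and subunit-ls are on your PATH and that you run it from the
project checkout - for the very first round you'd point subunit-ls at the
subunit log pulled from the gate instead of 'testr last'):

import subprocess

def lines(cmd):
    # Run a shell pipeline and return its stdout split into lines.
    return subprocess.check_output(
        cmd, shell=True, universal_newlines=True).splitlines()

# Everything testr knows about, minus what the last (bailed-out) run reported.
should_run = set(lines("testr list-tests"))
did_run = set(lines("testr last --subunit | subunit1to2 | subunit-ls"))
suspects = sorted(should_run - did_run)

with open("suspects.list", "w") as f:
    f.write("\n".join(suspects) + "\n")

# Re-run only the suspects; repeat the whole thing until the run bails out
# immediately - the first test in suspects.list is then the likely culprit.
subprocess.call("testr run --load-list suspects.list", shell=True)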

-Rob



On 12 February 2014 16:49, Gary Duan garyd...@gmail.com wrote:
 Clark,

 You are right. The test run must have bailed out. The question is what I
 should look for. Even a successful case has a lot of Traceback and ERROR
 entries in subunit_log.txt.

 Thanks,
 Gary


 On Tue, Feb 11, 2014 at 7:19 PM, Clark Boylan clark.boy...@gmail.com
 wrote:

 On Tue, Feb 11, 2014 at 7:05 PM, Gary Duan garyd...@gmail.com wrote:
  Hi, Clark,
 
  Thanks for your reply.
 
  I thought the same thing at first, but the page by default only shows
  the
  failed cases. The other 1284 cases were OK.
 
  Gary
 
 
  On Tue, Feb 11, 2014 at 6:07 PM, Clark Boylan clark.boy...@gmail.com
  wrote:
 
  On Tue, Feb 11, 2014 at 5:52 PM, Gary Duan garyd...@gmail.com wrote:
   Hi,
  
   The patch I submitted for L3 service framework integration fails on
   jenkins
   test, py26 and py27. The console only gives following error message,
  
   2014-02-12 00:45:01.710 | FAIL: process-returncode
   2014-02-12 00:45:01.711 | tags: worker-1
  
   and at the end,
  
   2014-02-12 00:45:01.916 | ERROR: InvocationError:
   '/home/jenkins/workspace/gate-neutron-python27/.tox/py27/bin/python
   -m
   neutron.openstack.common.lockutils python setup.py testr --slowest
   --testr-args='
   2014-02-12 00:45:01.917 | ___ summary
   
   2014-02-12 00:45:01.918 | ERROR:   py27: commands failed
  
   I wonder what might be the reason for the failure and how to debug
   this
   problem?
  
   The patch is at, https://review.openstack.org/#/c/59242/
  
   The console output is,
  
  
   http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/console.html
  
   Thanks,
  
   Gary
  
  
   ___
   OpenStack-dev mailing list
   OpenStack-dev@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  
 
  I haven't dug into this too far but
 
 
  http://logs.openstack.org/42/59242/7/check/gate-neutron-python27/e395b06/testr_results.html.gz
  seems to offer some clues. Not sure why the console output doesn't
  show the additional non exit code errors (possibly a nonstandard
  formatter? or a bug?).
 
  Also, cases like this tend to be the test framework completely dying
  due to a sys.exit somewhere or similar. This kills the tests and runs
  only a small subset of them which seems to be the case here.
 
  Clark
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 I picked a different neutron master change and it ran 10k py27
 unittests. Pretty sure the test framework is bailing out early here.

 Clark

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] User Signup

2014-02-14 Thread Soren Hansen
On 15/02/2014 00.19, Adam Young ayo...@redhat.com wrote:
 Could you please spend 5 minutes on the blueprint
https://blueprints.launchpad.net/horizon/+spec/user-registration and add
your suggestions on the whiteboard.
 Does it make sense for this to be in Keystone first, and then Horizon
just consumes it?  I would think that user-registration-request would be
a reasonable Keystone extension.  Then, you would add a role
user-approver  for a specific domain to approve a user, which would
trigger the create event.

This makes perfect sense to me.
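
To make that flow concrete, here's a purely hypothetical sketch of how a
client might drive such an extension - none of these endpoints or payload
names exist in Keystone today, and everything except the 'user-approver'
role name is made up:

import requests

KEYSTONE = "http://keystone.example.com:5000/v3"   # hypothetical deployment
EXT = KEYSTONE + "/OS-REGISTRATION"                # hypothetical extension prefix

# 1. Signup creates a pending registration request rather than a real user.
resp = requests.post(EXT + "/registrations",
                     json={"registration": {"name": "alice",
                                            "email": "alice@example.com",
                                            "domain_id": "default"}})
reg_id = resp.json()["registration"]["id"]

# 2. Someone holding the 'user-approver' role on that domain approves the
#    request, and the approval is what triggers the actual user-create event.
requests.post(EXT + "/registrations/%s/approve" % reg_id,
              headers={"X-Auth-Token": "APPROVER_TOKEN"})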

/Soren
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VPC Proposal

2014-02-14 Thread Martin, JC
Harshad,

I'm not sure I understand what you mean by:
 However, many of these concepts are not exposed to AWS customers, and
 the API works well.

So for example in :

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html#VPC_EIP_EC2_Differences

When it says: 
"When you allocate an EIP, it's for use only in a VPC."

Are you saying that the behavior of your API would be consistent without 
scoping the external networks to a VPC and using the public pool instead?

I believe that your API may work for basic features on small deployments with 
only one VPC, but as soon as you have complex setups with external gateways 
that need to be isolated, I'm not sure that it will provide parity with 
what EC2 provides.


Maybe I missed something.


JC

On Feb 14, 2014, at 7:35 PM, Harshad Nakil hna...@contrailsystems.com wrote:

 Hi JC,
 
 You have put it aptly. The goal of the blueprint is to present a facade for
 the AWS VPC API, as the name suggests.
 As per your definition of a VPC, shared networks will have issues.
 However, many of these concepts are not exposed to AWS customers, and
 the API works well.
 While we work incrementally towards your definition of a VPC, we can
 maintain compatibility with the AWS API that we are proposing, as we are a
 subset of your proposal and don't expose all features within a VPC.
 
 Regards
 -Harshad
 
 
 On Feb 14, 2014, at 6:22 PM, Martin, JC jch.mar...@gmail.com wrote:
 
 Rudra,
 
 I do not agree that the current proposal provides the semantics of a VPC. If 
 the goal is to only provide a facade through the EC2 API, it may address 
 this, but unless you implement the basic features of a VPC, what good is it 
 doing?
 
 I do believe that the work can be done incrementally if we agree on the 
 basic properties of a VPC, for example :
  - allowing projects to be created while using resources defined at the VPC 
 level
  - preventing resources not explicitly defined at the VPC level from being used 
 by a VPC.
 
 I do not see in the current proposal how resources are scoped to a VPC, and 
 how, for example, you prevent a shared network from being used within a VPC, or how 
 you can define shared networks (or other shared resources) to be scoped only 
 to a VPC.
 
 I think we already raised our concern to you several months ago, but it did 
 not seem to have been addressed in the current proposal.
 
 thanks,
 
 JC
 
 On Feb 14, 2014, at 3:50 PM, Rudra Rugge rru...@juniper.net wrote:
 
 Hi JC,
 
 We agree with your proposed model of a VPC resource object. The proposal you 
 are making makes sense to us and we would like to collaborate further on 
 this. After reading your blueprint, two things come to mind.
 
 1. A VPC vision for OpenStack? (Your blueprint is proposing this vision.)
 2. Providing AWS VPC API compatibility within the current constraints of the 
 OpenStack structure.
 
 The blueprint that we proposed targets #2.
 It gives a way to implement an AWS-VPC-compatible API. This helps a subset 
 of customers migrate their workloads from AWS to OpenStack-based clouds. 
 In our implementation we tied a VPC to a project. That was the easiest way to 
 keep isolation within the current structure. We agree that what you are proposing 
 is more generic. One way is to implement our current proposal to have a 
 one-VPC-to-one-project mapping. As your blueprint matures we will 
 move to a VPC-to-multiple-projects mapping.
 
 We feel that instead of throwing away all the work done we can take an 
 incremental approach.
 
 Regards,
 Rudra
 
 
 On Feb 14, 2014, at 11:09 AM, Martin, JC jch.mar...@gmail.com wrote:
 
 
 There is a Blueprint targeted for Icehouse-3 that is aiming to implement 
 the AWS VPC api. I don't think that this blueprint is providing the 
 necessary constructs to really implement a VPC, and it is not taking into 
 account the domains, or proposed multi tenant hierarchy. In addition, I 
 could not find a discussion about this topic leading to the approval.
 
 For this reason, I wrote an 'umbrella' blueprint to hopefully start the 
 discussion on how to really implement VPC, and eventually split it into 
 multiple real blueprints for each area.
 
 Please, provide feedback on the following document, and on the best way to 
 move this forward.
 
 https://wiki.openstack.org/wiki/Blueprint-VPC
 
 Thanks,
 
 JC.
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 

Re: [openstack-dev] [nova][EC2] attach and detach volume response status

2014-02-14 Thread wu jiang
Hi,

I checked the AttachVolume in AWS EC2:
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-AttachVolume.html

The status returned is 'attaching':

<AttachVolumeResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
  <requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
  <volumeId>vol-1a2b3c4d</volumeId>
  <instanceId>i-1a2b3c4d</instanceId>
  <device>/dev/sdh</device>
  <status>attaching</status>
  <attachTime>YYYY-MM-DDTHH:MM:SS.000Z</attachTime>
</AttachVolumeResponse>


So I think it's a bug IMO. Thanks~
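
To spell out the mismatch, a hypothetical illustration using the field names
reported in this thread (this is not the actual nova code):

# Right after euca-attach-volume, the two fields mentioned above can disagree:
volume = {
    'id': 'vol-0001',
    'attach_status': 'detached',   # what the EC2 attach/detach response echoes today
    'status': 'attaching',         # roughly what DescribeVolumes would report
}

# The AWS AttachVolume example above answers with <status>attaching</status>,
# i.e. the attachment's own state, so matching AWS means the EC2 response
# should report the in-progress attach ('attaching') rather than echoing
# volume['attach_status'].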


wingwj


On Sat, Feb 15, 2014 at 11:35 AM, Rui Chen chenrui.m...@gmail.com wrote:

 Hi Stackers;

 I use the Nova EC2 interface to attach a volume; the attach succeeds, but the
 volume status shows as 'detached' in the response message.

 # euca-attach-volume -i i-000d -d /dev/vdb vol-0001
 ATTACHMENT  vol-0001  i-000d  detached

 This confuses me; I think the status should be 'attaching' or 'in-use'.

 I find that the attach and detach volume interfaces return volume['attach_status'],
 but the describe volume interface returns volume['status'].

 Is it a bug, or is there some other consideration I am not aware of?

 Thanks

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-14 Thread shihanzhang
I'm  interested in it. UTC8.
At 2014-02-15 00:31:47,punal patel punal.pa...@gmail.com wrote:

I am interested. UTC - 8.



On Fri, Feb 14, 2014 at 1:48 AM, Nick Ma skywalker.n...@gmail.com wrote:
I'm also interested in it. UTC8.

--

cheers,
Li Ma



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The BP for the persistent resource claim

2014-02-14 Thread Jiang, Yunhong
Hi, Brian
I created the BP for the persistent resource claim at 
https://blueprints.launchpad.net/nova/+spec/persistent-resource-claim , based 
on the discussion on IRC. Can you please have a look at it and let me know of any 
potential issues? 

Thanks
--jyh 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request for testing new cloud foundation layer on bare metal

2014-02-14 Thread Aryeh Friedman
We apologize for the unclearness of our wording both here and on
our site (http://www.petitecloud.org).  Over the next few weeks we will
work on improving our descriptions of various aspects of what PetiteCloud
is and what it is not.  We will also add a set of tutorials showing what a
cloud foundation layer (CFL) is and how it can make OpenStack more stable
and robust in non-data-center environments.  In the meantime, hopefully my
answers below will help with some immediate clarification.

For general answers as to what a CFL is, see our 25 words or less
answer on our site (http://petitecloud.org/cloudFoundation.jsp) or see the
draft notes for a forthcoming white paper on the topic (
http://lists.petitecloud.nyclocal.net/private.cgi/petitecloud-general-petitecloud.nyclocal.net/attachments/20140213/3fee4df0/attachment-0001.pdf).
OpenStack does not currently have a cloud foundation layer of its own
(creating one might be a good sub-project for OpenStack).

Your specific questions are answered inline:



On Fri, Feb 14, 2014 at 11:28 PM, Robert Collins
robe...@robertcollins.netwrote:

 I'm sorry if this sounds rude, but I've been seeing your emails come
 in, and I've read your website, and I still have 0% clue about what
 PetiteCloud is.

 On 12 February 2014 21:56, Aryeh Friedman aryeh.fried...@gmail.com
 wrote:
  PetiteCloud is a 100% Free Open Source and Open Knowledge bare metal
 capable
  Cloud Foundation Layer for Unix-like operating systems. It has the
 following
  features:

 What is a Cloud Foundation Layer? Whats the relevance of OK here (I
 presume you mean http://okfn.org/ ?).



 We have no connection with the above site. Personally we agree with its
goals, but our use of the term Open Knowledge is different and pertains
only to technical knowledge. See our web site for details on what we mean
by that term. http://petitecloud.org/fosok.jsp



  * Support for bhyve (FreeBSD only) and QEMU
  * Any x86 OS as a guest (FreeBSD and Linux via bhyve or QEMU; all
 others
  via QEMU only) and all supported software (including running OpenStack on
  VM's)
  * Install, import, start, stop and reboot instances safely (guest OS
  needs to be controlled independently)
  * Clone, backup/export, delete stopped instances 100% safely

 So far it sounds like a hypervisor management layer - which is what Nova
 is.


 Nova is for running end user instances. PetiteCloud is designed (see
below) to run instances that OpenStack can run on and then partition into
end-user instances.



  * Keep track of all your instances on one screen

 I think you'll need a very big screen eventually :)

 Not a huge one. A CFL needs to run only a relatively small number of
instances itself. Remember that a cloud foundation layer's instances can be
used as hosts (a.k.a. nodes) for a full-fledged IaaS platform such as
OpenStack. Thus, for example, a set of just four PetiteCloud instances
might serve as the complete compute, networking, storage, etc. nodes for an
OpenStack installation which in turn is running, say, 10 instances.
Additional compute, storage and/or hybrid nodes (real and virtual) can be
added to the deployment via any combination of bare metal OpenStack nodes and
CFL'ed ones. Since PetiteCloud does not yet have any API hooks, you would
need to limit this to a small number of PetiteCloud hosts.



  * All transactions that change instance state are password protected
 at
  all critical stages
  * Advanced options:
  * Ability to use/make bootable bare metal disks for backing
 stores
  * Multiple NIC's and disks
  * User settable (vs. auto assigned) backing store locations

 if backing store == virtual disk, this sounds fairly straight forward,
 though 'bootable bare metal disks' is certainly an attention grabbing
 statement for a hypervisor.


 As explained in the white paper, since we are a full layer 0 cloud
platform instead of just a hypervisor manager, we can do things that would
normally not be possible for an unmanaged hypervisor (or even wise if not
managed by a full layer 0 platform). One of these is that you can make the
storage target of your layer 0 instances a physical disk. Additionally,
since PetiteCloud does not require any guest modifications when you
install the OS (which is managed by the hypervisor), you can make your root
disk a physical drive. You can take this to some really interesting
extremes; for example, one of our core team members (not me) posted a few
nights ago to our mailing list how to make a cloud on a stick:
http://lists.petitecloud.nyclocal.net/private.cgi/petitecloud-general-petitecloud.nyclocal.net/2014-February/000106.html
Namely, how to have a bootable USB drive that contains your entire cloud.



  * A growing number of general purpose and specialized
  instances/applications are available for PetiteCloud
 
  We would like to know if people a) find this useful and b) does it live
 up
  to its claims for a wide variety of OpenStack 

Re: [openstack-dev] [Neutron] Interest in discussing vendor plugins for L3 services?

2014-02-14 Thread Yi Sun
I'm interested in it. UTC8


On Fri, Feb 14, 2014 at 11:01 PM, shihanzhang ayshihanzh...@126.com wrote:

 I'm  interested in it. UTC8.


 At 2014-02-15 00:31:47,punal patel punal.pa...@gmail.com wrote:

 I am interested. UTC - 8.


 On Fri, Feb 14, 2014 at 1:48 AM, Nick Ma skywalker.n...@gmail.com wrote:

 I'm also interested in it. UTC8.

 --

 cheers,
 Li Ma


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Android-x86
http://www.android-x86.org
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev