Re: [openstack-dev] [heat] Remove deprecation properties

2015-01-18 Thread Angus Salkeld
On Fri, Jan 16, 2015 at 11:10 PM, Sergey Kraynev skray...@mirantis.com
wrote:

 Steve, Thanks for the feedback.

 On 16 January 2015 at 15:09, Steven Hardy sha...@redhat.com wrote:

 On Thu, Dec 25, 2014 at 01:52:43PM +0400, Sergey Kraynev wrote:
 Hi all.
 Recently we got several patches on review which remove old
 deprecated properties [1], and one of mine [2].
 The aim is to delete deprecated code and redundant tests. It looks
 simple, but the main problem we met is backward compatibility.
 E.g. a user has created a resource (FIP) with the old property schema, i.e.
 using SUBNET_ID instead of SUBNET. At first look nothing bad will happen,
 because:

 FWIW I think it's too soon to remove the Neutron subnet_id/network_id
 properties, they were only deprecated in Juno [1], and it's evident that
 users are still using them on icehouse [2]

 I thought the normal deprecation cycle was at least two releases, but I
 can't recall where I read that.  Considering the overhead of maintaining
 these is small, I'd favour leaning towards giving more time for users to
 update their templates, rather than breaking them via very aggressive
 removal of deprecated interfaces.


 Honestly I thought that we use 1 release cycle, but I have no
 objections to doing it after two releases.
 I will be glad to know what the desired deprecation period is.



 I'd suggest some or all of the following:

 - Add a "planned for removal in $release" note to the SupportStatus string
   associated with the deprecation, so we can document the planned removal.
 - Wait for at least two releases between deprecation and removal, and
   announce the interfaces which will be removed in the release notes for
   the release before removal, e.g.:
   - Deprecated in Juno
   - Announce planned removal in the Kilo release notes
   - Remove in L
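
 For example, a rough sketch of how such a planned-removal note could be
 attached to a deprecated property (a hedged sketch only, assuming Heat's
 support.SupportStatus accepts a free-form message; the property names
 below are purely illustrative):

```
# Hedged sketch, not actual Heat code: attach a planned-removal note to a
# deprecated property via its SupportStatus. Property names are illustrative.
from heat.engine import properties
from heat.engine import support

properties_schema = {
    'subnet_id': properties.Schema(
        properties.Schema.STRING,
        'ID of the subnet (DEPRECATED).',
        support_status=support.SupportStatus(
            status=support.DEPRECATED,
            message='Deprecated in Juno; planned for removal in L. '
                    'Use the "subnet" property instead.')),
    'subnet': properties.Schema(
        properties.Schema.STRING,
        'ID or name of the subnet.'),
}
```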


 I like this idea, IMO it will make our deprecation process clear.





 [1] https://review.openstack.org/#/c/82853/
 [2]
 http://lists.openstack.org/pipermail/openstack/2015-January/011156.html

 1. handle_delete uses resource_id, and any changes in the property schema
 do not affect other actions.
 2. If a user wants to use an old template, he will get an adequate error
 message that this property is not present in the schema. After that he
 just should switch to the new property and update the stack using it.
 At the same time we have one big issue with shadow dependencies,
 which is relevant for Neutron resources. The simplest approach will not
 work [3], because the old properties were deleted from property_schema.
 Why is it bad?
 - we will again get all the bugs related to such dependencies.
 - how to reproduce:
   - create a stack with the old property (my template [4])
   - open Horizon and look at the topology
   - download patch [2] and restart the engine
   - reload the Horizon page with the topology
   - as a result it will be different
 I have some ideas about how to solve this, but none of them is good
 enough for me:
 - getting such information from self.properties.data is bad, because we
 will skip all validations mentioned in properties.__getitem__
 - renaming the old key in data to the new one, or creating a copy with the
 new key, is not correct for me, because in this case we actually change
 properties (the resource representation) invisibly to the user.
 - possibly we may leave the old deprecated property and mark it as
 something like (removed), which would behave similarly to
 implemented=False. I do not like it, because it means that we never
 remove this support code, because we want to stay compatible with old
 resources. (The user may not be too lazy to do a simple update or
 something else ...)
 - the last way, which I have not tried yet, is using _stored_properties_data
 for extracting the necessary information.
 So now I have the questions:
 Should we support such a case for backward compatibility?
 If yes, what is the best way to do it for us and for the user?
 Maybe we should create some strategy for removing deprecated properties?

 Yeah, other than the process issues I mentioned above, Angus has pointed
 out some technical challenges which may mean property removal breaks
 existing stacks.  IMHO this is something we *cannot* do - folks must be
 able to upgrade heat over multiple versions without breaking their stacks.

 As you say, delete may work, but it's likely several scenarios around
 update will break if the stored stack definition doesn't match the schema
 of the resource, and maintaining the internal references to removed or
 obsolete properties doesn't seem like a good plan long term.

 Could we provide some sort of migration tool, which re-writes the
 definition of existing stacks (via a special patch stack update maybe?)
 before upgrading heat?


 Yeah, I thought about it. Probably it's a good solution ...

Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Steven, I just filed two bps to track all the discussions for network and
scheduler support for native docker; we can have more discussion there.

https://blueprints.launchpad.net/magnum/+spec/native-docker-network
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Another thing I want to discuss is still network: currently, Magnum only
supports Neutron. What about nova-network support?

2015-01-19 0:39 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 09:23 AM, Jay Lau wrote:

 Thanks Steven, more questions/comments in line.

 2015-01-19 0:11 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 06:39 AM, Jay Lau wrote:

   Thanks Steven, just some questions/comments here:

  1) For native docker support, do we have some project to handle the
 network? The current native docker support did not have any logic for
 network management, are we going to leverage neutron or nova-network just
 like nova-docker for this?

  We can just use flannel for both these use cases.  One way to approach
 using flannel is that we can expect docker networks will always be setup
 the same way, connecting into a flannel network.

 What about introducing neutron/nova-network support for native docker
 container just like nova-docker?



 Does that mean introducing an agent on the uOS?  I'd rather not have
 agents, since all of these uOS systems have wonky filesystem layouts and
 there is not an easy way to customize them, with dib for example.

 2) For k8s, swarm, we can leverage the scheduler in those container
 management tools, but what about docker native support? How to handle
 resource scheduling for native docker containers?

   I am not clear on how to handle native Docker scheduling if a bay has
 more than one node.  I keep hoping someone in the community will propose
 something that doesn't introduce an agent dependency in the OS.

 My thinking is this: add a new scheduler just like what nova/cinder is
 doing now, and then we can migrate to gantt once it becomes mature. Comments?


 Cool that WFM.  Too bad we can't just use gantt out the gate.

 Regards
 -steve



 Regards
 -steve


  Thanks!

 2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should be released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of CoreOS in
 addition to Fedora Atomic as a Container uOS.  We also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code
 should be tagged and ready to run Tuesday January 20th.  Master should work
 well enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 blueprints are here for comparison's sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 --
   Thanks,

  Jay Lau (Guangya Liu)


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack 

Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Pipes

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management, are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both these use cases.  One way to approach
using flannel is that we can expect docker networks will always be setup
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of the 
features of Magnum is its use of Neutron:


Integration with Neutron for k8s multi-tenancy network security.

Is this not true?


2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How to handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.


So, perhaps because I've not been able to find any documentation for 
Magnum besides the README (the link to developers docs is a 404), I have 
quite a bit of confusion around what value Magnum brings to the 
OpenStack ecosystem versus a tenant just installing Kubernetes on one or 
more of their VMs and managing container resources using k8s directly.


Is the goal of Magnum to basically be like Trove is for databases and be 
a Kubernetes-installation-as-a-Service endpoint?


Thanks in advance for more info on the project. I'm genuinely curious.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sos-ci for cinder scst

2015-01-18 Thread Asselin, Ramy
Hi Nikesh,

Not familiar with sos-ci, but in general, for drivers that are not yet in-tree, 
you’ll want to use the gerrit event patch referenced, then apply (e.g. cherry 
pick) the patch that contains the out-of-tree driver on top.

Ramy

From: Nikesh Kumar Mahalka [mailto:nikeshmaha...@vedams.com]
Sent: Friday, January 16, 2015 11:56 PM
To: OpenStack Development Mailing List (not for usage questions); John Griffith
Cc: Sreedhar Varma
Subject: [openstack-dev] sos-ci for cinder scst

Hi,
The localconf.base file in sos-ci/sos-ci/templates has

CINDER_BRANCH = master
volume_driver=cinder.volume.drivers.solidfire.SolidFireDriver
Similarly, in our localconf.base file we have

CINDER_BRANCH = master
[[post-config|$CINDER_CONF]]
[lvmdriver-1]
iscsi_helper=scstadmin
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

When sos-ci launches an instance and tries to install
devstack with CINDER_BRANCH=<gerrit event patch reference>, the cinder-volume
service is unable to start,
because our code is not in master for this local.conf to be run by
LVMISCSIDriver.

As far as we know, we should not set CINDER_BRANCH=refs/changes/78/145778/1
in our localconf.base, because sos-ci sets CINDER_BRANCH to the gerrit
event stream's patch reference.


Regards
Nikesh
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] API Definition Formats

2015-01-18 Thread Jay Pipes

On 01/13/2015 07:41 AM, Sean Dague wrote:

On 01/09/2015 04:17 PM, Everett Toews wrote:

One thing that has come up in the past couple of API WG meetings
[1] is just how useful a proper API definition would be for the
OpenStack projects.

By API definition I mean a format like Swagger, RAML, API
Blueprint, etc. These formats are a machine/human readable way of
describing your API. Ideally they drive the implementation of both
the service and the client, rather than treating the format like
documentation where it’s produced as a by product of the
implementation.

I think this blog post [2] does an excellent job of summarizing the
role of API definition formats.

Some of the other benefits include validation of
requests/responses, easier review of API design/changes, more
consideration given to client design, generating some portion of
your client code, generating documentation, mock testing, etc.

If you have experience with an API definition format, how has it
benefitted your prior projects?

Do you think it would benefit your current OpenStack project?


It would hugely benefit OpenStack to have this clear somewhere that
was readable.

I don't specifically have experience with these, my only feedback
would be make sure whatever format supports having multiple examples
per API call referenced or embedded.

My experience is that API specs aren't typically fully read and
ingested. Instead examples are used to get some minimum working
code, then bits are spot referenced and evolved until the client code
looks like it does what was expected. So providing multiple examples
per API will help more people wrap their head around the interface in
question.


This is spot-on, Sean.

I would support making Swagger the API definition format for OpenStack 
APIs. I think it's by far the best of the bunch, in my experience, and 
I've used API Blueprint, Swagger, and RAML.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] gabbi: A tool for declarative testing of APIs

2015-01-18 Thread Jay Pipes

On 01/12/2015 02:20 PM, Chris Dent wrote:


After some discussion with Sean Dague and a few others it became
clear that it would be a good idea to introduce a new tool I've been
working on to the list to get a sense of its usefulness generally,
work towards getting it into global requirements, and get the
documentation fleshed out so that people can actually figure out how
to use it well.

tl;dr: Help me make this interesting tool useful to you and your
HTTP testing by reading this message and following some of the links
and asking any questions that come up.

The tool is called gabbi

 https://github.com/cdent/gabbi
 http://gabbi.readthedocs.org/
 https://pypi.python.org/pypi/gabbi

It describes itself as a tool for running HTTP tests where requests
and responses are represented in a declarative form. Its main
purpose is to allow testing of APIs where the focus of test writing
(and reading!) is on the HTTP requests and responses, not on a bunch of
Python (that obscures the HTTP).

The tests are written in YAML and the simplest test file has this form:

```
tests:
- name: a test
  url: /
```

This test will pass if the response status code is '200'.

The test file is loaded by a small amount of python code which transforms
the file into an ordered sequence of TestCases in a TestSuite[1].

```
import os
import sys

from gabbi import driver

# TESTS_DIR and SimpleWsgi are defined elsewhere in the loader module.
def load_tests(loader, tests, pattern):
    """Provide a TestSuite to the discovery process."""
    test_dir = os.path.join(os.path.dirname(__file__), TESTS_DIR)
    return driver.build_tests(test_dir, loader, host=None,
                              intercept=SimpleWsgi,
                              fixture_module=sys.modules[__name__])
```

The loader provides either:

* a host to which real over-the-network requests are made
* a WSGI app which is wsgi-intercept-ed[2]

If an individual TestCase is asked to be run by the testrunner, those tests
that are prior to it in the same file are run first, as prerequisites.

Each test file can declare a sequence of nested fixtures to be loaded
from a configured (in the loader) module. Fixtures are context managers
(they establish the fixture upon __enter__ and destroy it upon
__exit__).

With a proper group_regex setting in .testr.conf each YAML file can
run in its own process in a concurrent test runner.

The docs contain information on the format of the test files:

 http://gabbi.readthedocs.org/en/latest/format.html

Each test can state request headers and bodies and evaluate both response
headers and response bodies. Request bodies can be strings in the
YAML, files read from disk, or JSON created from YAML structures.
Response verification can use JSONPath[3] to inspect the details of
response bodies. Response header validation may use regular
expressions.
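
For illustration, a slightly fuller test along those lines might look like
the following (a hedged sketch based on the format documentation above;
the resource names, paths and values are made up):

```
tests:
- name: create a widget
  url: /widgets
  method: POST
  request_headers:
    content-type: application/json
  data:
    name: one-widget
  status: 201

- name: get the widget
  url: /widgets/one-widget
  response_headers:
    content-type: application/json
  response_json_paths:
    $.name: one-widget
```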

There is limited support for referring to the previous request
to construct URIs, potentially allowing traversal of a full HATEOAS
compliant API.

At the moment the most complete examples of how things work are:

* Ceilometer's pending use of gabbi:
   https://review.openstack.org/#/c/146187/
* Gabbi's testing of gabbi:
   https://github.com/cdent/gabbi/tree/master/gabbi/gabbits_intercept
   (the loader and faked WSGI app for those yaml files is in:
   https://github.com/cdent/gabbi/blob/master/gabbi/test_intercept.py)

One obvious thing that will need to happen is a suite of concrete
examples on how to use the various features. I'm hoping that
feedback will help drive that.

In my own experimentation with gabbi I've found it very useful. It's
helped me explore and learn the ceilometer API in a way that existing
test code has completely failed to do. It's also helped reveal
several warts that will be very useful to fix. And it is fast. To
run and to write. I hope that with some work it can be useful to you
too.


Very impressive, Chris, thanks very much for bringing Gabbi into the 
OpenStack ecosystem. I very much look forward to replacing the API 
samples code in Nova with Gabbi, which looks very clean and 
easily-understandable for anyone.


Best,
-jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Yee, Guang
+1!

 On Jan 18, 2015, at 3:17 PM, Jamie Lennox jamielen...@redhat.com wrote:
 
 +1
 
 - Original Message -
 From: Morgan Fainberg morgan.fainb...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, 19 January, 2015 5:11:02 AM
 Subject: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec  
 core
 
 Hello all,
 
 I would like to nominate Brad Topol for Keystone Spec core (core reviewer for
 Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a
 consistent voice advocating for well defined specifications, use of existing
 standards/technology, and ensuring the UX of all projects under the Keystone
 umbrella continue to improve. Brad brings to the table a significant amount
 of insight to the needs of the many types and sizes of OpenStack
 deployments, especially what real-world customers are demanding when
 integrating with the services. Brad is a core contributor on pycadf (also
 under the Keystone umbrella) and has consistently contributed code and
 reviews to the Keystone projects since the Grizzly release.
 
 Please vote with +1/-1 on adding Brad as core to the Keystone Spec repo.
 Voting will remain open until Friday Jan 23.
 
 Cheers,
 Morgan Fainberg
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Support status of Heat resource types

2015-01-18 Thread Angus Salkeld
On Sun, Jan 18, 2015 at 10:41 PM, Qiming Teng teng...@linux.vnet.ibm.com
wrote:

 Dear all,
 One question we constantly get from Heat users is about the support
 status of resource types. Some users are not well informed of this
 information so that is something we can improve.

 Though some resource types are already labelled with support status,
 there are quite a few of them not yet identified. Help is needed to
 complete the list.


I think the only way to figure these out is using the git history. This is
going to be tedious work :-(

-Angus


 +--+++
 | name | support_status | since  |
 +--+++
 | AWS::AutoScaling::AutoScalingGroup   || 2014.1 |
 | AWS::AutoScaling::LaunchConfiguration|||
 | AWS::AutoScaling::ScalingPolicy  |||
 | AWS::CloudFormation::Stack   |||
 | AWS::CloudFormation::WaitCondition   || 2014.1 |
 | AWS::CloudFormation::WaitConditionHandle || 2014.1 |
 | AWS::CloudWatch::Alarm   |||
 | AWS::EC2::EIP|||
 | AWS::EC2::EIPAssociation |||
 | AWS::EC2::Instance   |||
 | AWS::EC2::InternetGateway|||
 | AWS::EC2::NetworkInterface   |||
 | AWS::EC2::RouteTable || 2014.1 |
 | AWS::EC2::SecurityGroup  |||
 | AWS::EC2::Subnet |||
 | AWS::EC2::SubnetRouteTableAssociation|||
 | AWS::EC2::VPC|||
 | AWS::EC2::VPCGatewayAttachment   |||
 | AWS::EC2::Volume |||
 | AWS::EC2::VolumeAttachment   |||
 | AWS::ElasticLoadBalancing::LoadBalancer  |||
 | AWS::IAM::AccessKey  |||
 | AWS::IAM::User   |||
 | AWS::RDS::DBInstance |||
 | AWS::S3::Bucket  |||
 | My::TestResource |||
 | OS::Ceilometer::Alarm|||
 | OS::Ceilometer::CombinationAlarm || 2014.1 |
 | OS::Cinder::Volume   |||
 | OS::Cinder::VolumeAttachment |||
 | OS::Glance::Image|| 2014.2 |
 | OS::Heat::AccessPolicy   |||
 | OS::Heat::AutoScalingGroup   || 2014.1 |
 | OS::Heat::CloudConfig|| 2014.1 |
 | OS::Heat::HARestarter| DEPRECATED ||
 | OS::Heat::InstanceGroup  |||
 | OS::Heat::MultipartMime  || 2014.1 |
 | OS::Heat::RandomString   || 2014.1 |
 | OS::Heat::ResourceGroup  || 2014.1 |
 | OS::Heat::ScalingPolicy  |||
 | OS::Heat::SoftwareComponent  || 2014.2 |
 | OS::Heat::SoftwareConfig || 2014.1 |
 | OS::Heat::SoftwareDeployment || 2014.1 |
 | OS::Heat::SoftwareDeployments|| 2014.2 |
 | OS::Heat::Stack  |||
 | OS::Heat::StructuredConfig   || 2014.1 |
 | OS::Heat::StructuredDeployment   || 2014.1 |
 | OS::Heat::StructuredDeployments  || 2014.2 |
 | OS::Heat::SwiftSignal|| 2014.2 |
 | OS::Heat::SwiftSignalHandle  || 2014.2 |
 | OS::Heat::UpdateWaitConditionHandle  || 2014.1 |
 | OS::Heat::WaitCondition  || 2014.2 |
 | OS::Heat::WaitConditionHandle|| 2014.2 |
 | OS::Neutron::Firewall|||
 | OS::Neutron::FirewallPolicy  |||
 | OS::Neutron::FirewallRule|||
 | OS::Neutron::FloatingIP  |||
 | OS::Neutron::FloatingIPAssociation   |||
 | OS::Neutron::HealthMonitor   |   

Re: [openstack-dev] Notification Schemas ...

2015-01-18 Thread Jay Pipes

On 01/18/2015 04:39 PM, Sandy Walsh wrote:

Hey y'all

Eddie Sheffield has pulled together a strawman set of notification
schemas for Nova and Glance. Looks like a great start for further
discussion. He's going to add JSON-Schema validation next as a form
of unit test. Then I guess we have to start thinking about a library
to digest these and help us build validated notifications.

Please have a peek. Bend, spindle, mutilate as required. Pull
requests welcome.

(we also have to figure out where it's going to live, this is just a
parking spot)

https://github.com/StackTach/notification-schemas


Thanks Sandy! And thanks Eddie for putting together the strawman!

Some important things that I see are missing, so far... please let me 
know what your thoughts are regarding these.


1) There needs to be some method of listing the notification codes. By 
code, I mean compute.instance_create.start, or possibly the CADF 
event codes, which I believe I recommended way back when the original 
ML thread started.


2) Each notification message payload must contain a version in it. We 
need some ability to evolve the notification schemas over time, and a 
version in the payload is a pre-requisite for that.
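
As a hedged illustration only (the field names here are hypothetical, not
part of the strawman schemas), the kind of versioned payload this implies
might look like:

```
# Hypothetical example of a versioned notification payload; field names and
# values are illustrative, not an agreed schema.
notification = {
    'event_type': 'compute.instance.create.start',
    'payload': {
        'version': '1.0',   # allows consumers to handle schema evolution
        'instance_id': 'EXAMPLE-UUID',
        'state': 'building',
    },
}
```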


All the best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Replication on image create

2015-01-18 Thread joehuang
(Replacing the word "synchronization" with "replication" to reduce misunderstanding)

I also recommend approach #2.

For approach #1:
1) You have to maintain a state machine if you want to replicate the image to 3
or more backends.
2) Not all backends will always be replicated successfully; unless you make it
like a transaction, it's hard to handle a broken replication and
re-replicate.
3) With more and more backends, the transaction will often fail to replicate the
image to all destinations.

For approach #2:
The image status needs to be enhanced to reflect the image availability for
each location. The consumer of the glance API can then check whether the
image is ready for a specific location and, if it is not ready, either trigger a
replication immediately or report a failure.

If the image is available only after all backends have been replicated, the end
user experience is good, but you have to wait for all locations to be ready, and
that is not easy to do, considering broken replications and backends leaving and
joining: the more backends there are, the harder it is.

Another recommendation is to trigger the replication on demand: when the first 
VM using this image is booted, the image will be replicated to the proper 
backend for the new VM on demand. The shortcoming of this approach is that the
first VM boot will take longer than usual, but the process is much more
stable.

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Wednesday, January 14, 2015 10:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance] Replication on image create

On 01/13/2015 04:55 PM, Boden Russell wrote:
 Looking for some feedback from the glance dev team on a potential BP…

 The use case I’m trying to solve is —

 As an admin, I want my glance image bits replicated to multiple store 
 locations (of the same store type) during a glance create operation.

 For example, I have 3 HTTP based backend locations I want to store my 
 glance image bits on. When I create / upload a new glance image, I 
 want those bits put onto all 3 HTTP based locations and have the 
 image's 'locations' metadata properly reflect all stored locations.

 There are obviously multiple approaches to getting this done.

 [1] Allow individual glance store drivers the ability to manage config and 
 connectivity to multiple backends. For example in the glance-api.conf:

 [DEFAULT]
 store_backends = http1,http2,http3
 ...
 [http1]
 # http 1 backend props
 ...
 [http2]
 # http 2 backend props
 ...
 [http3]
 # http 3 backend props
 ...

 And then in the HTTP store driver use a configuration approach like 
 cinder multi-backend does (e.g.:
 https://github.com/openstack/cinder/blob/2f09c3031ef2d2db598ec4c56f6127e33d29b2cc/cinder/volume/configuration.py#L52).
 Here, the store driver handles all the logic w/r/t pushing the image 
 bits to all backends, etc..

The problem with this solution is that the HTTP Glance storage backend is 
readonly. You cannot upload an image to Glance using the http backend.

 [2] A separate (3rd party) process which handles the image 
 replication and location metadata updates... For example listens for 
 the glance notification on create and then takes the steps necessary 
 to replicate the bits elsewhere and update the image metadata (locations).

This is the solution that I would recommend. Frankly, this kind of replication 
should be an async out-of-band process similar to bittorrent. Just have 
bittorrent or rsync or whatever replicate the image bits to a set of target 
locations and then call the
glanceclient.v2.client.images.add_location() method:

https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L211

to add the URI of the replicated image bits.
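
For reference, a hedged usage sketch of that call (the endpoint, token, image
ID and URL below are placeholders, and the exact client construction may
differ in your environment):

```
# Hedged sketch: after an out-of-band copy (rsync, bittorrent, ...) has placed
# the image bits at a new URL, register that URL as an extra image location.
# All values below are placeholders.
from glanceclient import Client

glance = Client('2', 'http://glance-api.example.com:9292', token='ADMIN_TOKEN')
glance.images.add_location(
    'IMAGE_ID',                                         # existing image UUID
    'http://replica-2.example.com/images/image.qcow2',  # replicated bits
    {'store': 'replica-2'},                             # location metadata
)
```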

 [3] etc...


 In a prototype I implemented #1 which can be done with no impact 
 outside of the store driver code itself.

I'm not entirely sure how you did that considering the http storage backend is 
readonly. Are you saying you implemented the add() method for the 
glance_store._drivers.http.Store class?

Best,
-jay

  I prefer #1 over #2 given approach #2
 may need to pull the image bits back down from the initial location in 
 order to push for replication; additional processing.

 Is the dev team adverse to option #1 for the store driver's who wish 
 to implement it and / or what are the other (preferred) options here?


 Thank you,
 - boden


 __
  OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 07:59 PM, Jay Pipes wrote:

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management, are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both these use cases.  One way to approach
using flannel is that we can expect docker networks will always be setup
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of 
the features of Magnum is its use of Neutron:


Integration with Neutron for k8s multi-tenancy network security.

Is this not true?


Jay,

We do integrate today with Neutron for multi-tenant network security.  
Flannel runs on top of Neutron networks using vxlan. Neutron provides 
multi-tenant security; Flannel provides container networking.  Together, 
they solve the multi-tenant container networking problem in a secure way.


It's a shame these two technologies can't be merged at this time, but we 
will roll with it until someone invents an integration.



2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How to handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.


So, perhaps because I've not been able to find any documentation for 
Magnum besides the README (the link to developers docs is a 404), I 
have quite a bit of confusion around what value Magnum brings to the 
OpenStack ecosystem versus a tenant just installing Kubernetes on one 
or more of their VMs and managing container resources using k8s directly.


Agreed, documentation is scarce at this point.  The only thing we really 
have at  this time is the developer guide here:

https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst

Installing Kubernetes in one or more of their VMs would also work.
In fact, you can do this easily today with larsks'
heat-kubernetes Heat template (which we shamelessly borrowed), without
magnum at all.


We do intend to offer bare metal deployment of kubernetes as well, which 
should offer a significant I/O performance advantage, which is after all 
what cloud services are all about.


Of course someone could just deploy kubernetes themselves on bare metal, 
but there isn't at this time an integrated tool to provide a
Kubernetes-installation-as-a-service endpoint.  Magnum does that job
today on master.  I suspect it can and will do more as we get 
past our 2 month mark of development ;)



Is the goal of Magnum to basically be like Trove is for databases and 
be a Kubernetes-installation-as-a-Service endpoint?


I believe that is how the project vision started out.  I'm not clear on 
the long term roadmap - I suspect there is a lot more value that can be 
added in.  Some of these things, like manually or automatically scaling 
the infrastructure, show some of our future plans.
suggestions.



Thanks in advance for more info on the project. I'm genuinely curious.



Always a pleasure,
-steve


Best,
-jay

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Pipes

On 01/18/2015 11:02 PM, Steven Dake wrote:

On 01/18/2015 07:59 PM, Jay Pipes wrote:

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management, are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both these use cases.  One way to approach
using flannel is that we can expect docker networks will always be setup
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of
the features of Magnum is its use of Neutron:

Integration with Neutron for k8s multi-tenancy network security.

Is this not true?


Jay,

We do integrate today with Neutron for multi-tenant network security.
Flannel runs on top of Neutron networks using vxlan. Neutron provides
multi-tenant security; Flannel provides container networking.  Together,
they solve the multi-tenant container networking problem in a secure way.


Gotcha. That makes sense, now.


It's a shame these two technologies can't be merged at this time, but we
will roll with it until someone invents an integration.


2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How to handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.


So, perhaps because I've not been able to find any documentation for
Magnum besides the README (the link to developers docs is a 404), I
have quite a bit of confusion around what value Magnum brings to the
OpenStack ecosystem versus a tenant just installing Kubernetes on one
or more of their VMs and managing container resources using k8s directly.


Agreed, documentation is scarce at this point.  The only thing we really
have at  this time is the developer guide here:
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst

Installing Kubernetes in one or more of their VMs would also work.
In fact, you can do this easily today with larsks'
heat-kubernetes Heat template (which we shamelessly borrowed), without
magnum at all.

We do intend to offer bare metal deployment of kubernetes as well, which
should offer a significant I/O performance advantage, which is after all
what cloud services are all about.

Of course someone could just deploy kubernetes themselves on bare metal,
but there isn't at this time an integrated tool to provide a
Kubernetes-installation-as-a-service endpoint.  Magnum does that job
today on master.  I suspect it can and will do more as we get
past our 2 month mark of development ;)


Ha! No worries, Steven. :) Heck, I have enough trouble just keeping up 
with the firehose of information about new container-related stuffs that 
I'm well impressed with the progress that the container team has made so 
far. I just wish I had ten more hours a day to read and research more on 
the topic!



Is the goal of Magnum to basically be like Trove is for databases and
be a Kubernetes-installation-as-a-Service endpoint?


I believe that is how the project vision started out.  I'm not clear on
the long term roadmap - I suspect there is a lot more value that can be
added in.  Some of these things, like manually or automatically scaling
the infrastructure, show some of our future plans.
suggestions.


Well, when I wrap my brain around more of the container technology, I 
will certainly try and provide some feedback! :)


Best,
-jay


Thanks in advance for more info on the project. I'm genuinely curious.



Always a pleasure,
-steve


Best,
-jay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Steve Martinelli
+1

Steve

Morgan Fainberg morgan.fainb...@gmail.com wrote on 01/18/2015 02:11:02 
PM:

 From: Morgan Fainberg morgan.fainb...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: 01/18/2015 02:15 PM
 Subject: [openstack-dev] [Keystone] Nominating Brad Topol for 
 Keystone-Spec core
 
 Hello all,
 
 I would like to nominate Brad Topol for Keystone Spec core (core 
 reviewer for Keystone specifications and API-Specification only: 
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has 
 been a consistent voice advocating for well defined specifications, 
 use of existing standards/technology, and ensuring the UX of all 
 projects under the Keystone umbrella continue to improve. Brad 
 brings to the table a significant amount of insight to the needs of 
 the many types and sizes of OpenStack deployments, especially what 
 real-world customers are demanding when integrating with the 
 services. Brad is a core contributor on pycadf (also under the 
 Keystone umbrella) and has consistently contributed code and reviews
 to the Keystone projects since the Grizzly release.
 
 Please vote with +1/-1 on adding Brad as core to the Keystone 
 Spec repo. Voting will remain open until Friday Jan 23.
 
 Cheers,
 Morgan Fainberg
 
__
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron ML2][VMWare]NetworkNotFoundForBridge: Network could not be found for bridge br-int

2015-01-18 Thread Foss Geek
Hi Xarses,

Thanks for your time!

I was not able to check my mail yesterday. Sorry for the delay.

One of my colleagues fixed this issue yesterday. I will look into the issue
and update this thread.

-- 
Thanks & Regards
E-Mail: thefossg...@gmail.com
IRC: neophy
Blog : http://lmohanphy.livejournal.com/



On Sat, Jan 17, 2015 at 1:17 AM, Andrew Woodward xar...@gmail.com wrote:

 neophy,

 It seems like there are leftovers that fuel was using in the config
 that would not be present when you installed neutron fresh. I'd
 compare the config files and start backing out bits you don't need. I'd
 start with the lines referencing br-int; you don't need them on nodes
 that aren't using the ovs agent.

 Poke me on IRC if you need more help

 Xarses (GMT-8)

 On Fri, Jan 9, 2015 at 1:08 PM, Foss Geek thefossg...@gmail.com wrote:
  Dear All,
 
  I am trying to integrate Openstack + vCenter + Neutron + VMware dvSwitch
 ML2
  Mechanism driver.
 
  I deployed a two node openstack environment (controller + compute with
 KVM)
  with Neutron VLAN + KVM using fuel 5.1. Again, I installed nova-compute
 using
  yum on the controller node and configured nova-compute on the controller to point
  to vCenter. I am also using Neutron VLAN with the VMware dvSwitch ML2 Mechanism
  driver. My vCenter is properly configured as suggested by the doc:
 
 https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/
 
  I am able to create network from Horizon and I can see the same network
  created in vCenter. When I try to create a VM I am getting the below
 error
  in Horizon.
 
  Error: Failed to launch instance test-01: Please try again later
 [Error:
  No valid host was found. ].
 
  Here is the error message from Instance Overview tab:
 
  Instance Overview
  Info
  Name
  test-01
  ID
  309a1f47-83b6-4ab4-9d71-642a2000c8a1
  Status
  Error
  Availability Zone
  nova
  Created
  Jan. 9, 2015, 8:16 p.m.
  Uptime
  0 minutes
  Fault
  Message
  No valid host was found.
  Code
  500
  Details
  File
 /usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py,
  line 108, in schedule_run_instance raise exception.NoValidHost(reason=)
  Created
  Jan. 9, 2015, 8:16 p.m
 
  Getting the below error in nova-all.log:
 
 
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.135 31870 DEBUG
  keystoneclient.middleware.auth_token
  [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Authenticating user token
  __call__
 
 /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:676
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.136 31870 DEBUG
  keystoneclient.middleware.auth_token
  [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Removing headers from request
  environment:
 
 X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
  _remove_auth_headers
 
 /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:733
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.137 31870 DEBUG
  keystoneclient.middleware.auth_token
  [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Returning cached token
  _cache_get
 
 /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1545
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.138 31870 DEBUG
  keystoneclient.middleware.auth_token
  [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Storing token in cache store
 
 /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1460
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.139 31870 DEBUG
  keystoneclient.middleware.auth_token
  [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Received request from user:
  4564fea80fa14e1daed160afa074d389 with project_id :
  dd32714d9009495bb51276e284380d6a and roles: admin,_member_
  _build_user_headers
 
 /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:996
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.141 31870 DEBUG
  routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Matched GET
 
 /dd32714d9009495bb51276e284380d6a/servers/309a1f47-83b6-4ab4-9d71-642a2000c8a1
  __call__ /usr/lib/python2.6/site-packages/routes/middleware.py:100
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG
  routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Route path:
  '/{project_id}/servers/:(id)', defaults: {'action': u'show',
 'controller':
  nova.api.openstack.wsgi.Resource object at 0x43e2550} __call__
  /usr/lib/python2.6/site-packages/routes/middleware.py:102
  183Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.142 31870 DEBUG
  routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Match dict:
  {'action': u'show', 'controller': nova.api.openstack.wsgi.Resource
 object
  at 0x43e2550, 'project_id': 

Re: [openstack-dev] [heat] Remove deprecation properties

2015-01-18 Thread Angus Salkeld
On Sat, Jan 17, 2015 at 5:01 AM, Georgy Okrokvertskhov 
gokrokvertsk...@mirantis.com wrote:

 Hi,

 Murano uses Heat templates with almost all available resources. Neutron
 resources are definitely used.
 I think Murano can update our Heat resources handling properly, but there
 are at least two scenarios which should be considered:
 1) Murano generated stacks are long lasting. Murano uses stack update to
 modify stacks so it is expected that stack update process is not affected
 by Heat upgrade and resource schema deprecation.
 2) Murano uses application packages which contain HOT snippets.
 Application authors heavily rely on backward compatibility so that
 applications written on Icehouse version should work on later OpenStack
 versions. If it is not the case there should be some mechanism to
 automatically translate old resource schema to a new one.

 I hope all the changes will be documented somewhere. I think it will be
 good to have a wiki page with a list of schema versions and changes. This
 will help Heat users to modify their templates accordingly.

 Another potential issue I see is the fact that it is quite common for
 multiple versions of OpenStack to be used in data centers, e.g. the previous
 version in production and newer versions of OpenStack in staging and dev
 environments which are used to prepare for the production upgrade and for
 current development. If these different versions of OpenStack require
 different versions of Heat templates, it might be a problem, as instead of
 upgrading just infrastructure services one will need to synchronously
 upgrade different external components which rely on Heat templates.


Thank Georgy,

We will tread carefully here. Once we add a property, I don't see how we
can ever totally remove support for it.

-Angus



 Thanks
 Georgy


 On Fri, Jan 16, 2015 at 5:10 AM, Sergey Kraynev skray...@mirantis.com
 wrote:

 Steve, Thanks for the feedback.

 On 16 January 2015 at 15:09, Steven Hardy sha...@redhat.com wrote:

 On Thu, Dec 25, 2014 at 01:52:43PM +0400, Sergey Kraynev wrote:
 Hi all.
 Recently we got several patches on review which remove old
 deprecated properties [1], and one of mine [2].
 The aim is to delete deprecated code and redundant tests. It looks
 simple, but the main problem we met is backward compatibility.
 E.g. a user has created a resource (FIP) with the old property schema, i.e.
 using SUBNET_ID instead of SUBNET. At first look nothing bad will happen,
 because:

 FWIW I think it's too soon to remove the Neutron subnet_id/network_id
 properties, they were only deprecated in Juno [1], and it's evident that
 users are still using them on icehouse [2]

 I thought the normal deprecation cycle was at least two releases, but I
 can't recall where I read that.  Considering the overhead of maintaining
 these is small, I'd favour leaning towards giving more time for users to
 update their templates, rather than breaking them via very aggressive
 removal of deprecated interfaces.


 Honestly I thought that we use 1 release cycle, but I have no
 objections to doing it after two releases.
 I will be glad to know what the desired deprecation period is.



 I'd suggest some or all of the following:

 - Add a "planned for removal in $release" note to the SupportStatus string
   associated with the deprecation, so we can document the planned
 removal.
 - Wait for at least two releases between deprecation and removal, and
   announce the interfaces which will be removed in the release notes for
   the release before removal, e.g.:
   - Deprecated in Juno
   - Announce planned removal in the Kilo release notes
   - Remove in L


 I like this idea, IMO it will make our deprecation process clear.





 [1] https://review.openstack.org/#/c/82853/
 [2]
 http://lists.openstack.org/pipermail/openstack/2015-January/011156.html

 1. handle_delete uses resource_id, and any changes in the property schema
 do not affect other actions.
 2. If a user wants to use an old template, he will get an adequate error
 message that this property is not present in the schema. After that he
 just should switch to the new property and update the stack using it.
 At the same time we have one big issue with shadow dependencies,
 which is relevant for Neutron resources. The simplest approach will not
 work [3], because the old properties were deleted from property_schema.
 Why is it bad?
 - we will again get all the bugs related to such dependencies.
 - how to reproduce:
   - create a stack with the old property (my template [4])
   - open Horizon and look at the topology
   - download patch [2] and restart the engine
   - reload the Horizon page with the topology
   - as a result it will be different
 I have some ideas about how to solve this, but none of them is good
 enough for me:
 - getting such information from self.properties.data is bad,
 because we
 

Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2015-01-18 Thread Eduard Matei
Hi Ramy, indeed user zuul could not read the event-stream (permission
denied).
The question is then how it could start zuul-server and read some events?
Anyway, I copied over .ssh from user jenkins, and now user zuul can run
that command.
I restarted zuul-server and will keep an eye on it.

Thanks,
Eduard

On Fri, Jan 16, 2015 at 8:32 PM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Hi Eduard,



 Looking at the zuul code, it seems that is just a periodic task:
 https://github.com/openstack-infra/zuul/blob/master/zuul/launcher/gearman.py#L50



 So the issue is not likely those log messages, but rather the lack of
 other log messages.

 It seems somehow zuul lost its connection to the gerrit event stream… those are
 the obvious log messages that are missing.

 And without that, no jobs will trigger a run, so I’d look there.



 Zuul Manual is here: http://ci.openstack.org/zuul/

 Zuul conf files is documented here:
 http://ci.openstack.org/zuul/zuul.html#zuul-conf

 And the gerrit configurations are here:
 http://ci.openstack.org/zuul/zuul.html#gerrit



 Double check you can manually read the event stream as the zuul user (sudo
 su - zuul) using those settings and this step:

 http://ci.openstack.org/third_party.html#reading-the-event-stream



 Ramy









 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Friday, January 16, 2015 6:57 AM

 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi Punith,

 That's the whole log :) Not a lot happening after restart, just default
 initialization.

 Zuul-merger is not restarted.

 Layout.yaml is default.

 Gearman plugin tested in Jenkins, reports success.



 I have now disabled the restart job to see how long it will keep "Looking
 for lost builds".



 Have a nice weekend,



 Eduard



 On Fri, Jan 16, 2015 at 1:15 PM, Punith S punit...@cloudbyte.com wrote:

   Hi eduard,



 can you post the whole zuul.log or debug.log after the zuul and
 zuul-merger restart along with your layout.yaml

  Did you test the connection of the gearman plugin in jenkins?



 thanks



 On Fri, Jan 16, 2015 at 4:20 PM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

  Hi Ramy,

 Still couldn't get my custom code to execute between installing devstack
 and starting tests... I'll try with some custom scripts and skip devstack-*
 scripts.



 Meanwhile i see another issue:

 2015-01-16 11:02:26,283 DEBUG zuul.IndependentPipelineManager: Finished
 queue processor: patch (changed: False)

 2015-01-16 11:02:26,283 DEBUG zuul.Scheduler: Run handler sleeping

 2015-01-16 11:06:06,873 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:11:06,873 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:16:06,874 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:21:06,874 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:26:06,875 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:31:06,875 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:36:06,876 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:41:06,876 DEBUG zuul.Gearman: Looking for lost builds

 2015-01-16 11:46:06,877 DEBUG zuul.Gearman: Looking for lost builds



 Zuul is stuck in Looking for lost builds and it misses comments so it
 doesn't trigger jobs on patches.

 Any idea how to fix this? (other than restarting it every 30 mins, in which
 case it misses the results of running jobs so it doesn't post the results).



 Thanks,

 Eduard



 On Fri, Jan 16, 2015 at 1:43 AM, Asselin, Ramy ramy.asse...@hp.com
 wrote:

  Hi Eduard,



 Glad you’re making progress.



 $BASE/new/devstack/ is available at the time pre_test_hook is called, so
 you should be able to make all the changes you need there.



 The sample shows how to configure the driver using local.conf devstack
 hooks.

 See here for more details: [1] [2]



 Regarding test, you can do both.

 Cinder requires you run tempest.api.volume[3]



 And you can setup a 2nd job that runs your internal functional tests as
 well.



 Ramy



 [1] http://docs.openstack.org/developer/devstack/configuration.html

 [2] http://docs.openstack.org/developer/devstack/plugins.html

 [3]
 https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Third_Party_CI_Requirements











 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Thursday, January 15, 2015 4:57 AM


 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi Ramy,



 The issue with disconnect/abort no longer happens, so I guess it was some
 issue with networking.



 Regarding the ssh keys, I finally used the Jenkins Configuration Provider
 Plugin to inject ssh keys as a pre-build step, then I added a manual
 execution step to scp the logs to the server, so now everything appears to
 be working.



 Now for the REAL tests:

 

Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread gordon chung
+1

cheers,
gord


From: morgan.fainb...@gmail.com
Date: Sun, 18 Jan 2015 12:11:02 -0700
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec 
core

Hello all,
I would like to nominate Brad Topol for Keystone Spec core (core reviewer for 
Keystone specifications and API-Specification only: 
https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a 
consistent voice advocating for well defined specifications, use of existing 
standards/technology, and ensuring the UX of all projects under the Keystone 
umbrella continue to improve. Brad brings to the table a significant amount of 
insight to the needs of the many types and sizes of OpenStack deployments, 
especially what real-world customers are demanding when integrating with the 
services. Brad is a core contributor on pycadf (also under the Keystone 
umbrella) and has consistently contributed code and reviews to the Keystone 
projects since the Grizzly release.
Please vote with +1/-1 on adding Brad as core to the Keystone Spec repo. 
Voting will remain open until Friday Jan 23.
Cheers,
Morgan Fainberg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev   


Re: [openstack-dev] [all] All rights reserved V.S. Apache license

2015-01-18 Thread ZhiQiang Fan
@Stefano Maffulli

Yes, the main point is the conflict between reserving all rights and then
abandoning some (actually most) of them.

According to the order, the later statement takes effect, if I understand
Monty Taylor's explanation correctly.

I'm thinking that we should remove the all rights reserved wording if we're
using the Apache license. Being misleading is not a good thing, especially
when it comes to legal issues.

On Mon, Jan 19, 2015 at 6:53 AM, Stefano Maffulli stef...@openstack.org
wrote:

 On Sat, 2015-01-17 at 16:07 -0500, Monty Taylor wrote:
  It's actually a set of words that is no longer necessary as of the year
  2000. It's not communicating anything about a granted license, which is
  what the Apache License does - it's actually just asserting that the
  original copyright holder asserts that they have not waived any of their
  rights as a copyright holder. However, the Berne convention grants this
  automatically without a positive assertion.

 I think ZhiQiang Fan's question is about the sentence all rights
 reserved followed by the implicit some rights not reserved granted by
 the Apache license, rather than the meaning of 'all rights reserved'
 alone. You're right that such sentence by itself is meaningless but in
 the context of the Apache license I think it's confusing at best,
 probably wrong.

 I don't remember seeing this case discussed on legal-discuss and I'm
 quite sure that the right way to apply the Apache license to source code
 is *not* by saying (C) `date +%Y` Foo Corp, All Rights Reserved
 followed by Apache license (see appendix on
 http://www.apache.org/licenses/LICENSE-2.0)

 Maybe a passage on legal-discuss would be better?

 /stef



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support does not have any logic for
network management; are we going to leverage neutron or nova-network for
this, just like nova-docker does?
2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How to handle
resource scheduling for native docker containers?

Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should be released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of coreOS in
 addition to Fedora Atomic as a Container uOS.  We also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code should
 be tagged and ready to run Tuesday January 20th.  Master should work well
 enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 blueprints are here for comparison's sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members





-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][qa][Rally] Thoughts on removing half of Rally benchmark scenarios

2015-01-18 Thread Jay Pipes

On 01/17/2015 12:47 PM, Boris Pavlovic wrote:

Hi stackers,

I have an idea about removing almost half of the Rally scenarios while keeping
all functionality.

Currently you can see a lot of similar benchmarks like:

NovaServers.boot_server             # boot server with passed arguments
NovaServers.boot_and_delete_server  # boot server with passed arguments and delete

The reason for having these 2 benchmarks is that they serve different purposes:

1) Nova.boot_server is used for *volume/scale testing*,
where we would like to see how N active VMs work and affect the OpenStack
API and the booting of the next VMs.

2) Nova.boot_and_delete_server is used for *performance/load* testing.
We are interested in how booting and deleting a VM perform under various
load (what the difference is in the duration of booting 1 VM when we have
1, 2, ..., M simultaneous VM boot actions).


*The idea is to keep only 1 boot_server and add an argument do_delete,
which is False by default.*

It means that:

  # this is equal to old Nova.boot_server
NovaServers.boot_server: [{args: {...} }]

# this is equal to old Nova.boot_and_delete_server
NovaServers.boot_server: [{args: {..., do_delete: True}}]


++ That would be a nice improvement, thanks Boris!

-jay
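
For illustration, the consolidation Boris describes would amount to roughly
the following in the scenario code (a paraphrased sketch, not verbatim Rally
code; module paths and decorator follow the conventions of that era):

    from rally.benchmark.scenarios import base
    from rally.benchmark.scenarios.nova import utils


    class NovaServers(utils.NovaScenario):

        @base.scenario(context={"cleanup": ["nova"]})
        def boot_server(self, image, flavor, do_delete=False, **kwargs):
            """Boot a server and optionally delete it afterwards.

            Replaces the separate boot_and_delete_server scenario.
            """
            server = self._boot_server(image, flavor, **kwargs)
            if do_delete:
                self._delete_server(server)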

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the 
network? The current native docker support did not have any logic for 
network management, are we going to leverage neutron or nova-network 
just like nova-docker for this?
We can just use flannel for both these use cases.  One way to approach 
using flannel is that we can expect docker networks will always be setup 
the same way, connecting into a flannel network.
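(For context: flannel of that era reads its network configuration from etcd,
so wiring the bay nodes into one overlay is essentially just publishing a
config key, roughly like the following; the CIDR is a placeholder:)

    etcdctl set /coreos.com/network/config '{ "Network": "10.100.0.0/16" }'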


2) For k8s, swarm, we can leverage the scheduler in those container 
management tools, but what about docker native support? How to handle 
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has 
more than one node.  I keep hoping someone in the community will propose 
something that doesn't introduce an agent dependency in the OS.


Regards
-steve


Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com 
mailto:sd...@redhat.com:


Hi folks and especially Magnum Core,

Magnum Milestone #1 should released early this coming week. I
wanted to kick off discussions around milestone #2 since Milestone
#1 development is mostly wrapped up.

The milestone #2 blueprints:
https://blueprints.launchpad.net/magnum/milestone-2

The overall goal of Milestone #1 was to make Magnum usable for
developers.  The overall goal of Milestone #2 is to make Magnum
usable by operators and their customers.  To do this we are
implementing blueprints like multi-tenant, horizontal-scale, and
the introduction of coreOS in addition to Fedora Atomic as a
Container uOS.  We are also plan to introduce some updates to
allow bays to be more scalable. We want bays to scale to more
nodes manually (short term), as well as automatically (longer
term).  Finally we want to tidy up some of the nit-picky things
about Magnum that none of the core developers really like at the
moment.  One example is the magnum-bay-status blueprint which will
prevent the creation of pods/services/replicationcontrollers until
a bay has completed orchestration via Heat.  Our final significant
blueprint for milestone #2 is the ability to launch our supported
uOS on bare metal using Nova's Ironic plugin and the baremetal
flavor.  As always, we want to improve our unit testing from what
is now 70% to ~80% in the next milestone.

Please have a look at the blueprints and feel free to comment on
this thread or in the blueprints directly.  If you would like to
see different blueprints tackled during milestone #2 that feedback
is welcome, or if you think the core team[1] is on the right
track, we welcome positive kudos too.

If you would like to see what we tackled in Milestone #1, the code
should be tagged and ready to run Tuesday January 20th.  Master
should work well enough now, and the developer quickstart guide is
mostly correct.

The Milestone #1 bluerpints are here for comparison sake:
https://blueprints.launchpad.net/magnum/milestone-1

Regards,
-steve


[1] https://review.openstack.org/#/admin/groups/473,members





--
Thanks,

Jay Lau (Guangya Liu)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Morgan Fainberg
Hello all,

I would like to nominate Brad Topol for Keystone Spec core (core reviewer for 
Keystone specifications and API-Specification only: 
https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a 
consistent voice advocating for well defined specifications, use of existing 
standards/technology, and ensuring the UX of all projects under the Keystone 
umbrella continue to improve. Brad brings to the table a significant amount of 
insight to the needs of the many types and sizes of OpenStack deployments, 
especially what real-world customers are demanding when integrating with the 
services. Brad is a core contributor on pycadf (also under the Keystone 
umbrella) and has consistently contributed code and reviews to the Keystone 
projects since the Grizzly release.

Please vote with +1/-1 on adding Brad as core to the Keystone Spec repo. 
Voting will remain open until Friday Jan 23.

Cheers,
Morgan Fainberg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Marek Denis

+1

On 18.01.2015 20:11, Morgan Fainberg wrote:

Hello all,

I would like to nominate Brad Topol for Keystone Spec core (core 
reviewer for Keystone specifications and API-Specification only: 
https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has 
been a consistent voice advocating for well defined specifications, 
use of existing standards/technology, and ensuring the UX of all 
projects under the Keystone umbrella continue to improve. Brad brings 
to the table a significant amount of insight to the needs of the many 
types and sizes of OpenStack deployments, especially what real-world 
customers are demanding when integrating with the services. Brad is a 
core contributor on pycadf (also under the Keystone umbrella) and has 
consistently contributed code and reviews to the Keystone projects 
since the Grizzly release.


Please vote with +1/-1 on adding Brad to as core to the Keystone Spec 
repo. Voting will remain open until Friday Jan 23.


Cheers,
Morgan Fainberg




--
Marek Denis
[marek.de...@cern.ch]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Lance Bragstad
+1
On Jan 18, 2015 1:23 PM, Marek Denis marek.de...@cern.ch wrote:

  +1

 On 18.01.2015 20:11, Morgan Fainberg wrote:

 Hello all,

  I would like to nominate Brad Topol for Keystone Spec core (core
 reviewer for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

  Please vote with +1/-1 on adding Brad to as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

  Cheers,
 Morgan Fainberg



 --
 Marek Denis
 [marek.de...@cern.ch]




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Raildo Mascena
+1

On Sun, 18 Jan 2015 at 16:25, Marek Denis marek.de...@cern.ch wrote:

  +1


 On 18.01.2015 20:11, Morgan Fainberg wrote:

 Hello all,

  I would like to nominate Brad Topol for Keystone Spec core (core
 reviewer for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

  Please vote with +1/-1 on adding Brad to as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

  Cheers,
 Morgan Fainberg



 --
 Marek Denis
 [marek.de...@cern.ch]

  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, more questions/comments in line.

2015-01-19 0:11 GMT+08:00 Steven Dake sd...@redhat.com:

  On 01/18/2015 06:39 AM, Jay Lau wrote:

   Thanks Steven, just some questions/comments here:

  1) For native docker support, do we have some project to handle the
 network? The current native docker support did not have any logic for
 network management, are we going to leverage neutron or nova-network just
 like nova-docker for this?

 We can just use flannel for both these use cases.  One way to approach
 using flannel is that we can expect docker networks will always be setup
 the same way, connecting into a flannel network.

What about introducing neutron/nova-network support for native docker
containers, just like nova-docker?


  2) For k8s, swarm, we can leverage the scheduler in those container
 management tools, but what about docker native support? How to handle
 resource scheduling for native docker containers?

   I am not clear on how to handle native Docker scheduling if a bay has
 more then one node.  I keep hoping someone in the community will propose
 something that doesn't introduce an agent dependency in the OS.

My thinking is this: add a new scheduler just like what nova/cinder have
now, and then we can migrate to gantt once it becomes mature. Comments?


 Regards
 -steve


  Thanks!

 2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com:

 Hi folks and especially Magnum Core,

 Magnum Milestone #1 should released early this coming week.  I wanted to
 kick off discussions around milestone #2 since Milestone #1 development is
 mostly wrapped up.

 The milestone #2 blueprints:
 https://blueprints.launchpad.net/magnum/milestone-2

 The overall goal of Milestone #1 was to make Magnum usable for
 developers.  The overall goal of Milestone #2 is to make Magnum usable by
 operators and their customers.  To do this we are implementing blueprints
 like multi-tenant, horizontal-scale, and the introduction of coreOS in
 addition to Fedora Atomic as a Container uOS.  We are also plan to
 introduce some updates to allow bays to be more scalable.  We want bays to
 scale to more nodes manually (short term), as well as automatically (longer
 term).  Finally we want to tidy up some of the nit-picky things about
 Magnum that none of the core developers really like at the moment.  One
 example is the magnum-bay-status blueprint which will prevent the creation
 of pods/services/replicationcontrollers until a bay has completed
 orchestration via Heat.  Our final significant blueprint for milestone #2
 is the ability to launch our supported uOS on bare metal using Nova's
 Ironic plugin and the baremetal flavor.  As always, we want to improve our
 unit testing from what is now 70% to ~80% in the next milestone.

 Please have a look at the blueprints and feel free to comment on this
 thread or in the blueprints directly.  If you would like to see different
 blueprints tackled during milestone #2 that feedback is welcome, or if you
 think the core team[1] is on the right track, we welcome positive kudos too.

 If you would like to see what we tackled in Milestone #1, the code should
 be tagged and ready to run Tuesday January 20th.  Master should work well
 enough now, and the developer quickstart guide is mostly correct.

 The Milestone #1 bluerpints are here for comparison sake:
 https://blueprints.launchpad.net/magnum/milestone-1

 Regards,
 -steve


 [1] https://review.openstack.org/#/admin/groups/473,members





 --
   Thanks,

  Jay Lau (Guangya Liu)









-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-operators][qa][Rally] Thoughts on removing half of Rally benchmark scenarios

2015-01-18 Thread Mikhail Dubov
Hi Boris,

I understand your concern about keeping the number of different benchmark
scenarios in Rally not too big so that users don't get confused. But what I
really like now about benchmark scenario names in Rally is that they are
highly declarative, i.e. you read them and you have a clear idea of what's
going on inside those scenarios. You see boot_and_delete_server and you
know that Rally will boot and then delete a server; boot_server means it
will only boot a server.

That's very convenient e.g. when you navigate through Rally report pages:
you see the scenario names in the left panel and you know what to expect
from their results. It seems to me that, if we merge scenarios like
boot_server
and boot_and_delete_server together, we will lose a bit in clarity.

Besides, as you pointed out, Nova.boot_server and
Nova.boot_and_delete_server
are used for two different purposes - which seems to be a strong reason
for keeping them separate.

Best regards,
Mikhail Dubov

Engineering OPS
Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Sat, Jan 17, 2015 at 8:47 PM, Boris Pavlovic bo...@pavlovic.me wrote:

 Hi stackers,

 I have an idea about removing almost half of rally scenarios and keep all
 functionality.

 Currently you can see a lot of similar benchmarks like:

 NovaServers.boot_server  # boot server with passed
 arguments
 NovaServers.boot_and_delete_server  # boot server with passed arguments
 and delete

 The reason of having this 2 benchmarks are various purpose of them:

 1) Nova.boot_server is used for *volume/scale testing*.
 Where we would like to see how N active VM works and affects OpenStack API
 and booting next VMs.

 2) Nova.boot_and_delete_server is used for *performance/load* testing.
 We are interested how booting and deleting VM perform in case on various
 load (what is different in duration of booting 1 VM when we have 1, 2, M
 simultaneously VM boot actions)


 *The idea is to keep only 1 boot_server and add arguments do_delete with
 by default False. *

 It means that:

  # this is equal to old Nova.boot_server
 NovaServers.boot_server: [{args: {...} }]

 # this is equal to old Nova.boot_and_delete_server
 NovaServers.boot_server: [{args: {..., do_delete: True}}]


 Thoughts?


 Best regards,
 Boris Pavlovic



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 09:23 AM, Jay Lau wrote:

Thanks Steven, more questions/comments in line.

2015-01-19 0:11 GMT+08:00 Steven Dake sd...@redhat.com 
mailto:sd...@redhat.com:


On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle
the network? The current native docker support did not have any
logic for network management, are we going to leverage neutron or
nova-network just like nova-docker for this?

We can just use flannel for both these use cases.  One way to
approach using flannel is that we can expect docker networks will
always be setup the same way, connecting into a flannel network.

What about introducing neutron/nova-network support for native docker 
container just like nova-docker?





Does that mean introducing an agent on the uOS?  I'd rather not have 
agents, since all of these uOS systems have wonky filesystem layouts and 
there is not an easy way to customize them, with dib for example.



2) For k8s, swarm, we can leverage the scheduler in those
container management tools, but what about docker native support?
How to handle resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay
has more then one node.  I keep hoping someone in the community
will propose something that doesn't introduce an agent dependency
in the OS.

My thinking is as this: Add a new scheduler just like what nova/cinder 
is doing now and then we can migrate to gantt once it become mature, 
comments?


Cool that WFM.  Too bad we can't just use gantt out the gate.

Regards
-steve



Regards
-steve



Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake sd...@redhat.com
mailto:sd...@redhat.com:

Hi folks and especially Magnum Core,

Magnum Milestone #1 should released early this coming week. 
I wanted to kick off discussions around milestone #2 since

Milestone #1 development is mostly wrapped up.

The milestone #2 blueprints:
https://blueprints.launchpad.net/magnum/milestone-2

The overall goal of Milestone #1 was to make Magnum usable
for developers.  The overall goal of Milestone #2 is to make
Magnum usable by operators and their customers.  To do this
we are implementing blueprints like multi-tenant,
horizontal-scale, and the introduction of coreOS in addition
to Fedora Atomic as a Container uOS.  We are also plan to
introduce some updates to allow bays to be more scalable.  We
want bays to scale to more nodes manually (short term), as
well as automatically (longer term).  Finally we want to tidy
up some of the nit-picky things about Magnum that none of the
core developers really like at the moment.  One example is
the magnum-bay-status blueprint which will prevent the
creation of pods/services/replicationcontrollers until a bay
has completed orchestration via Heat. Our final significant
blueprint for milestone #2 is the ability to launch our
supported uOS on bare metal using Nova's Ironic plugin and
the baremetal flavor.  As always, we want to improve our unit
testing from what is now 70% to ~80% in the next milestone.

Please have a look at the blueprints and feel free to comment
on this thread or in the blueprints directly.  If you would
like to see different blueprints tackled during milestone #2
that feedback is welcome, or if you think the core team[1] is
on the right track, we welcome positive kudos too.

If you would like to see what we tackled in Milestone #1, the
code should be tagged and ready to run Tuesday January 20th. 
Master should work well enough now, and the developer

quickstart guide is mostly correct.

The Milestone #1 bluerpints are here for comparison sake:
https://blueprints.launchpad.net/magnum/milestone-1

Regards,
-steve


[1] https://review.openstack.org/#/admin/groups/473,members






-- 
Thanks,


Jay Lau (Guangya Liu)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread Jamie Lennox
+1

- Original Message -
 From: Morgan Fainberg morgan.fainb...@gmail.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, 19 January, 2015 5:11:02 AM
 Subject: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec   
 core
 
 Hello all,
 
 I would like to nominate Brad Topol for Keystone Spec core (core reviewer for
 Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a
 consistent voice advocating for well defined specifications, use of existing
 standards/technology, and ensuring the UX of all projects under the Keystone
 umbrella continue to improve. Brad brings to the table a significant amount
 of insight to the needs of the many types and sizes of OpenStack
 deployments, especially what real-world customers are demanding when
 integrating with the services. Brad is a core contributor on pycadf (also
 under the Keystone umbrella) and has consistently contributed code and
 reviews to the Keystone projects since the Grizzly release.
 
 Please vote with +1/-1 on adding Brad to as core to the Keystone Spec repo.
 Voting will remain open until Friday Jan 23.
 
 Cheers,
 Morgan Fainberg
 
 
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Notification Schemas ...

2015-01-18 Thread Sandy Walsh
Hey y'all

Eddie Sheffield has pulled together a strawman set of notification schemas for 
Nova and Glance. Looks like a great start for further discussion. He's going to 
add JSON-Schema validation next as a form of unit test. Then I guess we have to 
start thinking about a library to digest these and help us build validated 
notifications. 
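
For a sense of what validation-as-unit-test could look like, here is a
minimal sketch (assuming the Python jsonschema package; the file paths are
hypothetical, not the actual layout of the repo):

    import json
    import unittest

    import jsonschema


    class TestComputeInstanceUpdate(unittest.TestCase):
        def test_sample_matches_schema(self):
            # hypothetical paths: a schema and a captured sample notification
            with open("nova/compute.instance.update.json") as f:
                schema = json.load(f)
            with open("samples/compute.instance.update.json") as f:
                sample = json.load(f)
            # raises jsonschema.ValidationError if the sample does not conform
            jsonschema.validate(sample, schema)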

Please have a peek. Bend, spindle, mutilate as required. Pull requests welcome. 

(we also have to figure out where it's going to live, this is just a parking 
spot)

https://github.com/StackTach/notification-schemas

-S

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] All rights reserved V.S. Apache license

2015-01-18 Thread Stefano Maffulli
On Sat, 2015-01-17 at 16:07 -0500, Monty Taylor wrote:
 It's actually a set of words that is no longer necessary as of the year
 2000. It's not communicating anything about a granted license, which is
 what the Apache License does - it's actually just asserting that the
 original copyright holder asserts that they have not waived any of their
 rights as a copyright holder. However, the Berne convention grants this
 automatically without a positive assertion.

I think ZhiQiang Fan's question is about the sentence 'all rights
reserved' followed by the implicit 'some rights not reserved' granted by
the Apache license, rather than the meaning of 'all rights reserved'
alone. You're right that such a sentence by itself is meaningless, but in
the context of the Apache license I think it's confusing at best,
probably wrong.

I don't remember seeing this case discussed on legal-discuss and I'm
quite sure that the right way to apply the Apache license to source code
is *not* by saying (C) `date +%Y` Foo Corp, All Rights Reserved
followed by Apache license (see appendix on
http://www.apache.org/licenses/LICENSE-2.0)
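
For comparison, the usual source header applies the license with no all
rights reserved sentence at all (wording quoted from memory; the canonical
text is the appendix linked above, and the copyright line is a placeholder):

    # Copyright 2015 Foo Corp
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may
    # not use this file except in compliance with the License. You may obtain
    # a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
    # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
    # License for the specific language governing permissions and limitations
    # under the License.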

Maybe a passage on legal-discuss would be better?

/stef


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-18 Thread David Stanek
+1

On Sun, Jan 18, 2015 at 2:11 PM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hello all,

 I would like to nominate Brad Topol for Keystone Spec core (core reviewer
 for Keystone specifications and API-Specification only:
 https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been
 a consistent voice advocating for well defined specifications, use of
 existing standards/technology, and ensuring the UX of all projects under
 the Keystone umbrella continue to improve. Brad brings to the table a
 significant amount of insight to the needs of the many types and sizes of
 OpenStack deployments, especially what real-world customers are demanding
 when integrating with the services. Brad is a core contributor on pycadf
 (also under the Keystone umbrella) and has consistently contributed code
 and reviews to the Keystone projects since the Grizzly release.

 Please vote with +1/-1 on adding Brad to as core to the Keystone Spec
 repo. Voting will remain open until Friday Jan 23.

 Cheers,
 Morgan Fainberg






-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev