Re: [openstack-dev] [nova] Should be instance_dir in all nova compute node same ?

2016-05-02 Thread taget

Hi Timofei,

I don't have any specific use case; I found this issue while doing
live migration testing. What I did is:


1. create 2 compute nodes,
2. create an NFS service on one of the compute nodes, let's say node-2, and
expose /opt/stack/data/nova/instances
3. mount node-2:/opt/stack/data/nova/instances at node-1's /mnt
4. specify /mnt as node-1's instance_dir so node-1 and node-2 have a
shared instance_dir (with different paths)
5. live migrate an instance, and it fails.


Thanks
Eli.

On 2016-04-29 19:52, Timofei Durakov wrote:

Hi,

At first sight there are no restrictions on the instance_path
option. Could you please provide some details about the use case?
I wonder whether this functionality would really be useful for
cloud operators, or whether we should just add a description to the
instance_path option, forcing the use of the same path on all compute nodes?


Timofey



On Fri, Apr 29, 2016 at 4:47 AM, Eli Qiao wrote:


hi team,

Is there any requirement that all compute nodes' instance_dir be
the same?

I recently hit an issue while doing live migration with the
migrateToURI3 method of the libvirt Python interface: if the
source and dest hosts are configured with different instance_dir
values, migration fails, since migrateToURI3 requires a new XML
but we don't modify the instance_dir in the dest XML.
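
For illustration, here is a minimal sketch (not the actual Nova code path) of
the kind of rewrite the destination XML needs before calling migrateToURI3();
the paths, instance name, and helper function are assumptions based on the
node-1/node-2 setup above:

    import libvirt

    SRC_DIR = '/opt/stack/data/nova/instances'  # instance_dir on the source
    DST_DIR = '/mnt'                            # instance_dir on the dest

    def rewrite_instance_dir(xml_desc, src_dir, dst_dir):
        # Naive string rewrite for illustration; real code would walk the
        # <disk>/<source file="..."> elements with an XML parser instead.
        return xml_desc.replace(src_dir, dst_dir)

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    dest_xml = rewrite_instance_dir(dom.XMLDesc(0), SRC_DIR, DST_DIR)

    params = {libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml}
    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
    dom.migrateToURI3('qemu+tcp://node-2/system', params, flags)

Without a rewrite like this, the dest XML still points at the source's
instance_dir, which matches the failure described above.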

I reported a bug at [1] and would like to get confirmation before
spending effort working on it.

[1] https://bugs.launchpad.net/nova/+bug/1576245

Thanks.

-- 


Best Regards, Eli Qiao (乔立勇)
Intel OTC China




--
Best Regards, Eli Qiao (乔立勇)



Re: [openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-02 Thread Steve Martinelli
Comments inline...

On Mon, May 2, 2016 at 7:39 PM, Matt Fischer  wrote:

> On Mon, May 2, 2016 at 5:26 PM, Clint Byrum  wrote:
>
>> Hello! I enjoyed very much listening in on the default token provider
>> work session last week in Austin, so thanks everyone for participating
>> in that. I did not speak up then, because I wasn't really sure of this
>> idea that has been bouncing around in my head, but now I think it's the
>> case and we should consider this.
>>
>> Right now, Keystones without Fernet keys are issuing UUID tokens. These
>> tokens will be in the database, and valid, for however long the token
>> TTL is.
>>
>> The moment that one changes the configuration, keystone will start
>> rejecting these tokens. This will cause disruption, and I don't think
>> that is fair to the users who will likely be shown new bugs in their
>> code at a very unexpected moment.
>>
>
> This will reduce the interruption and will also, as you said, possibly catch
> bugs. We had bugs in some custom Python code that didn't get a new token
> when the Keystone server returned a certain code, but we found all those in
> our dev environment.
>
> From an operational POV, I can't imagine that any operators will go to
> work one day and find out that they have a new token provider because of a
> new default. Wouldn't the settings in keystone.conf be under some kind of
> config management? I don't know what distros do with new defaults, however;
> maybe that would be the surprise?
>

With respect to upgrades, assuming we default to Fernet tokens in the
Newton release, it's only an issue if the deployer has no token format
specified (since it defaulted to UUID pre-Newton) and relied on the
default after the upgrade (since it switches to Fernet in Newton).

I'm glad Matt outlined his reasoning above, since it is nearly exactly
what Jesse Keating said at the Fernet token work session we had in Austin.
The straw man we came up with, of a deployer that just upgrades without
checking their config files, is just that: a straw man. Upgrades are well
planned and thought out before being performed. None of the operators in
the room saw this as an issue. We opened a bug to prevent keystone from
starting if fernet setup has not been run and Fernet is the
selected/defaulted token provider option:
https://bugs.launchpad.net/keystone/+bug/1576315

For all new installations, deploying your cloud will now have two extra
steps: running "keystone-manage fernet_setup" and "keystone-manage
fernet_rotate". We will update the install guide docs accordingly.

With all that said, we do intend to default to Fernet tokens for the Newton
release.


>
>
>>
>> I wonder if one could merge UUID and Fernet into a provider which
>> handles this transition gracefully:
>>
>> if self._fernet_keys:
>>   return self._issue_fernet_token()
>> else:
>>   return self._issue_uuid_token()
>>
>> And in the validation, do the same, but also with an eye toward keeping
>> the UUID tokens alive:
>>
>> if self._fernet_keys:
>>   try:
>> self._validate_fernet_token()
>>   except InvalidFernetFormatting:
>> self._validate_uuid_token()
>> else:
>>   self._validate_uuid_token()
>>
>
This just seems sneaky/wrong to me. I'd rather see a failure here than
switch token formats on the fly.

>> So that while one is rolling out new keystone nodes and syncing fernet
>> keys, all tokens issued would validated properly, with minimal extra
>> cost to support both (basically just a number of UUID tokens will need
>> to be parsed twice, once as Fernet, and once as UUID).
>>
>> Thoughts? I think doing this would make changing the default fairly
>> uncontroversial.
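
(As a concrete illustration of the fallback idea sketched above, here is a toy
model built on the cryptography library's Fernet; this is not Keystone's real
provider interface, and the class, names, and dict standing in for the token
database are all illustrative:

    import uuid
    from cryptography.fernet import Fernet, InvalidToken

    class TransitionalProvider(object):
        def __init__(self, fernet_key=None, uuid_db=None):
            self._fernet = Fernet(fernet_key) if fernet_key else None
            self._uuid_tokens = uuid_db if uuid_db is not None else {}

        def issue(self, payload):
            if self._fernet:
                return self._fernet.encrypt(payload.encode()).decode()
            token = uuid.uuid4().hex
            self._uuid_tokens[token] = payload  # stands in for the token DB
            return token

        def validate(self, token):
            if self._fernet:
                try:
                    return self._fernet.decrypt(token.encode()).decode()
                except InvalidToken:
                    pass  # fall through: may be a pre-existing UUID token
            return self._uuid_tokens[token]

    db = {}
    old = TransitionalProvider(uuid_db=db)                 # before keys exist
    t = old.issue('user:demo')
    new = TransitionalProvider(Fernet.generate_key(), db)  # after fernet_setup
    assert new.validate(t) == 'user:demo'

UUID tokens issued before the keys exist stay valid after the switch, at the
cost of parsing each one twice during the transition.)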
>>


[openstack-dev] [tricircle]cross pod L2 networking update in the design doc

2016-05-02 Thread joehuang
Hello,

As the cross-pod L2 networking spec is in review, and after some discussion
with people working on the L2GW project, I think we don't need to support a
DCI SDN controller for cross-pod L2 networking; L2GW will be good for
providing different networking options. Therefore the design doc
https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit
was updated, and it will be updated further according to the discussion in the
review https://review.openstack.org/#/c/304540/. The description in the design
doc is more general, but it needs to be accurate too, since it serves as the
guide.

You can also review and give comments on the design doc update.

Best Regards
Chaoyi Huang ( joehuang )


Re: [openstack-dev] [kolla][gate] Add a gating check job for precheck

2016-05-02 Thread Jeffrey Zhang
Does Kolla really need a new job to run the precheck?

Why not run the precheck before deploying Kolla in the current
logic?

On Tue, May 3, 2016 at 12:45 PM, Hui Kang  wrote:

> Steve,
> Ok, I created a bp for this. Feel free to edit
> https://blueprints.launchpad.net/kolla/+spec/gate-job-precheck
>
> Best regards,
> - Hui
>
> On Mon, May 2, 2016 at 11:50 PM, Steven Dake (stdake) wrote:
> > Hui,
> >
> > I am planning to add a general gating blueprint with work items for the 24
> > gates we identified.  Just go ahead and get started and I'll have the
> > gate blueprint ready to go by tomorrow.
> >
> > Regards
> > -steve
> >
> > On 5/2/16, 8:41 PM, "Hui Kang"  wrote:
> >
> >>Fellow kolla developers,
> >>I am wondering if anyone is working on adding a gate job for precheck.
> >>If not, I'd like to kick off the task by adding a bp. Any comment?
> >>Thanks.
> >>
> >>- Hui Kang
> >>IRC: huikang



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [TripleO] Summit Summary?

2016-05-02 Thread Jason Rist
Hey Everyone - Forgive me if this has already been sent out - I looked
through all of my emails and didn't see anything yet.  Can someone
provide a summary of TripleO from the summit, for instance some of the
output from the design sessions, like the Trove, Magnum, and some other
teams have done?  Those of us unable to attend are curious!

Thanks,
Jason
-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
openuc: +1.972.707.6408
mobile: +1.720.256.3933
Freenode: jrist
github/twitter: knowncitizen



Re: [openstack-dev] [Openstack-operators] [OpenStack-Ansible] Mitaka Upgrade

2016-05-02 Thread Kevin Carter
Hi Wade, sorry for the late reply; most of us have been traveling / afk for a
bit for the summit. Regarding Liberty -> Mitaka upgrades, there are a few
issues that we need to work out before we have a supported upgrade process.


Most notably we need to address:

* https://bugs.launchpad.net/openstack-ansible/+bug/1577245

* https://bugs.launchpad.net/openstack-ansible/+bug/1568029

* https://bugs.launchpad.net/openstack-ansible/+bug/1574019

* https://bugs.launchpad.net/openstack-ansible/+bug/1574303


It's likely that if you hand-fix these items you'll be all taken care of. That
said, work will be in full swing in the next week or so to get upgrades 100%
squared away, and there may be some other things that need to be addressed
before we'd say they're fully supported.

As for the RTFM part, it's not RTFM at all. We generally give our releases a
bit of time to stabilize before announcing to the world how "wonderful our
upgrade process is". Sadly, the issues you've found are a product of us not
having worked everything out yet. We do have some documentation regarding the
minor upgrade process, as outlined here: [0], as well as other operational
documentation that can be found here: [1]. Also, if you're interested in
working on the upgrades, we'd love to have a chat about all of the things
you've run into so that we can automate them away. I'd recommend joining the
#openstack-ansible IRC channel and potentially raising issues on Launchpad [2]
for things we've not yet thought of to work on.

As for what you've done thus far, it seems like a sensible approach. The
playbook execution should drop the new code into place and start/restart all
of the services. However, the straggler process may have been related to this
issue [ https://bugs.launchpad.net/openstack-ansible/+bug/1577245 ]; if you
can look through that and the other issues mentioned earlier, we'd appreciate
the feedback.

Sorry for the TL;DR I hope you're well. Ping us if you have questions.

[0] - 
http://docs.openstack.org/developer/openstack-ansible/install-guide/app-minorupgrade.html
[1] - 
http://docs.openstack.org/developer/openstack-ansible/install-guide/#operations
[2] - https://bugs.launchpad.net/openstack-ansible/

--

Kevin Carter
IRC: cloudnull


From: Wade Holler 
Sent: Thursday, April 28, 2016 11:33 AM
To: OpenStack Operators; OpenStack Development Mailing List (not for usage 
questions)
Subject: [openstack-dev] [Openstack-operators] [OpenStack-Ansible] Mitaka 
Upgrade

Hi All,

If this is RTFM please point me there and I apologize.

If not:

We are testing the Liberty to Mitaka transition with OSAD.  Could someone
please advise whether these were the correct general steps.

osad multinode (VMs/instances inside an osad cloud) built with the latest
12.X Liberty osad:
1. save off config files: openstack_user_config.yml, user_variables.yml,
...hostname..., inventory, etc.
2. rm -rf /etc/openstack_deploy; rm -rf /opt/openstack-ansible
3. git clone -b stable/mitaka 
4. copy config files back in place
5. ./scripts/bootstrap-ansible.sh
6. openstack-ansible setup-everything.yml

That is the process we ran, and it appears to have gone well after adding the
"-e rabbit_upgrade=true" flag.

The only straggler process I found so far was a neutron-ns-metadata-proxy that
was still running 12.X Liberty code.  I restarted the container and it didn't
start again, but the 13.X Mitaka version is running (and was before the
shutdown).

Is this the correct upgrade process or are there other steps / approaches that 
should be taken ?

Best Regards,
Wade




Re: [openstack-dev] [kolla][gate] Add a gating check job for precheck

2016-05-02 Thread Steven Dake (stdake)
Hui,

I am planning to add a general gating blueprint with work items for the 24
gates we identified.  Just go ahead and get started and I'll have the
gate blueprint ready to go by tomorrow.

Regards
-steve

On 5/2/16, 8:41 PM, "Hui Kang"  wrote:

>Fellow kolla developers,
>I am wondering if anyone is working on adding a gate job for precheck.
>If not, I'd like to kick off the task by adding a bp. Any comment?
>Thanks.
>
>- Hui Kang
>IRC: huikang


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Adam Young

On 05/02/2016 08:07 PM, Rochelle Grober wrote:

But, the original spelling of the landing site is Plimoth Rock.  There were still highway 
signs up in the 70's directing folks to "Plimoth Rock"

--Rocky
Who should know about rocks ;-)

-Original Message-
From: Brian Haley [mailto:brian.ha...@hpe.com]
Sent: Monday, May 02, 2016 3:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Timeframe for naming the P release?
There seems to be some vagueness about which name aligns with which
summit.  If the Q naming aligns with Boston, I think it could be fun.  The
obvious one is Quincy, but Quabbin (Reservoir) and lots of other Algonquin
names are in the running, too.


If it is P, there are many other options:

 * Palmer
 * Paxton
 * Peabody
 * Pembroke
 * Pepperell
 * Peru
 * Phillipston
 * Pittsfield
 * Plainville
 * Plymouth
 * Plympton
 * Princeton
 * Provincetown

And Providence is, I think, close enough for inclusion as well.  And
that is just the towns.



Plymouth is the only county in Mass with a P name, but Penobscot, ME,
used to be part of MA, and should probably be in the running as well.

On 05/02/2016 02:53 PM, Shamail Tahir wrote:

Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This gave us
the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata) releases.

If we were to vote for the name of the P release soon (since the location is now
known) we would be able to have names associated with the current release cycle
(Newton), N+1 (Ocata), and N+2 (P).  This would also allow us to get back to
only voting for one name per release cycle but consistently have names for N,
N+1, and N+2.

Is there really going to be an option besides Plymouth?  I remember something
important happened there in 1620 ;-)

https://en.wikipedia.org/wiki/Plymouth,_Massachusetts



[openstack-dev] [kolla][gate] Add a gating check job for precheck

2016-05-02 Thread Hui Kang
Fellow kolla developers,
I am wondering if anyone is working on adding a gate job for precheck.
If not, I'd like to kick off the task by adding a bp. Any comment?
Thanks.

- Hui Kang
IRC: huikang



Re: [openstack-dev] [kolla][gate] Add a gating check job for precheck

2016-05-02 Thread Hui Kang
Steve,
Ok, I created a bp for this. Feel free to edit
https://blueprints.launchpad.net/kolla/+spec/gate-job-precheck

Best regards,
- Hui

On Mon, May 2, 2016 at 11:50 PM, Steven Dake (stdake)  wrote:
> Hui,
>
> I am planning to add a general gating blueprint with work items for the 24
> gates we identified.  Just go ahead and get started and I'll have the
> gate blueprint ready to go by tomorrow.
>
> Regards
> -steve
>
> On 5/2/16, 8:41 PM, "Hui Kang"  wrote:
>
>>Fellow kolla developers,
>>I am wondering if anyone is working on adding a gate job for precheck.
>>If not, I'd like to kick off the task by adding a bp. Any comment?
>>Thanks.
>>
>>- Hui Kang
>>IRC: huikang


Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-05-02 10:43:21 -0700:
> On 05/02/2016 11:51 AM, Mike Bayer wrote:
> > On 05/02/2016 07:38 AM, Matthieu Simonin wrote:
> >> As far as we understand the idea of an ORM is to hide the relational
> >> database with an Object oriented API.
> >
> > I actually disagree with that completely.  The reason ORMs are so
> > maligned is because of this misconception; developer attempts to use an
> > ORM so that they will need not have to have any awareness of their
> > database, how queries are constructed, or even its schema's design;
> > witness tools such as Django ORM and Rails ActiveRecord which promise
> > this.   You then end up with an inefficient and unextensible mess
> > because the developers never considered anything about how the database
> > works or how it is queried, nor do they even have easy ways to monitor
> > or control it while still making use of the tool.   There are many blog
> > posts and articles that discuss this and it is in general known as the
> > "object relational impedance mismatch".
> >
> > SQLAlchemy's success comes from its rejection of this entire philosophy.
> >   The purpose of SQLAlchemy's ORM is not to "hide" anything but rather
> > to apply automation to the many aspects of relational database
> > communication as well as row->object mapping that otherwise express
> > themselves in an application as either a large amount of repetitive
> > boilerplate throughout an application or as an awkward series of ad-hoc
> > abstractions that don't really do the job very well.   SQLAlchemy is
> > designed to expose both the schema design as well as the structure of
> > queries completely.   My talk at [1] goes into this topic in detail
> > including specific API architectures that facilitate this concept.
> >
> > It's for that reason that I've always rejected notions of attempting to
> > apply SQLAlchemy directly on top of a datastore that is explicitly
> > non-relational.   By doing so, you remove a vast portion of the
> > functionality that relational databases provide and there's really no
> > point in using a tool like SQLAlchemy that is very explicit about DDL
> > and SQL on top of that kind of database.
> >
> > To effectively put SQLAlchemy on top of a non-relational datastore, what
> > you really want to do is build an entire SQL engine on top of it.  This
> > is actually feasible; I was doing work for the now-defunct FoundationDB
> > (was bought by Apple) who had a very good implementation of
> > SQL-on-top-of-distributed keystore going, and the Cockroach and TiDB
> > projects you mention are definitely the most appropriate choice to take
> > if a certain variety of distribution underneath SQL is desired.
> 
> Well said, Mike, on all points above.
> 
> 
> 
> > But also, w.r.t. Cells there seems to be some remaining debate over why
> > exactly a distributed approach is even needed.  As others have posted, a
> > single MySQL database, replicated across Galera or not, scales just fine
> > for far more data than Nova ever needs to store.  So it's not clear why
> > the need for a dramatic rewrite of its datastore is called for.
> 
> Cells(v1) in Nova *already has* completely isolated DB/MQ for each cell, 
> and there are a bunch of
> duplicated-but-slightly-different-and-impossible-to-maintain code paths 
> in the scheduler and compute manager. Part of the cellsv2 effort is to 
> remove these duplicated code paths.
> 
> Cells are, as much as anything else, an answer to an MQ scaling problem, 
> less so an answer to a DB scaling problem. Having a single MQ bus for 
> tens of thousands of compute nodes is just not tenable -- at least with 
> the message passing patterns and architecture that we use today...
> 
> Finally, Cells also represent a failure domain for the control plane. If 
> a network partition occurs between a cell and the top-level API layer, 
> no other cell is affected by the disruption.
> 
> Now, what does all this mean with regards to whether to use a single 
> distributed database solution versus a single RDBMS versus many isolated 
> RDBMS instances per cell? Not sure. Arguments can be made for all three 
> approaches clearly. Depending on what folks' priorities are with regards 
> to simplicity, scale, and isolation of failure domains, the "right" 
> choice is tough to determine.
> 
> On the one hand, using a single distributed datastore like Cassandra for 
> everything would make things conceptually easy to reason about and make 
> OpenStack clouds much easier to deploy at scale.
> 
> On the other hand, porting things to Cassandra (or any other NoSQL 
> solution) would require a complete overhaul of the way *certain* data is 
> queried in the Nova subsystems. Examples of Cassandra's poor fit for 
> some types of data are quota and resource usage aggregation queries. 
> While Cassandra does support some aggregation via CQL in recent 
> Cassandra versions, Cassandra simply wasn't built for this kind of data 
> access pattern and 
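
(The kind of aggregation query described above might look like this in
SQLAlchemy Core; a toy schema for illustration, not Nova's actual tables:

    import sqlalchemy as sa

    metadata = sa.MetaData()
    instances = sa.Table(
        'instances', metadata,
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('project_id', sa.String(36)),
        sa.Column('vcpus', sa.Integer),
        sa.Column('memory_mb', sa.Integer),
    )

    engine = sa.create_engine('sqlite://')
    metadata.create_all(engine)

    # Per-project resource usage rollup, e.g. for quota enforcement.
    usage = sa.select([
        instances.c.project_id,
        sa.func.sum(instances.c.vcpus).label('vcpus_used'),
        sa.func.sum(instances.c.memory_mb).label('ram_used'),
    ]).group_by(instances.c.project_id)

    with engine.connect() as conn:
        for row in conn.execute(usage):
            print(row.project_id, row.vcpus_used, row.ram_used)

A GROUP BY/SUM rollup like this is a single cheap statement in an RDBMS, but
maps poorly onto a partitioned key-value access model.)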

[openstack-dev] [QA][RFC] Skip this week meeting

2016-05-02 Thread Masayuki Igawa
Hi QA guys,

We have very few topics to discuss this week, so I think we should skip
this week's meeting.
Any thoughts?

Best Regards,
-- Masayuki Igawa



Re: [openstack-dev] [oslo] Removing Nova specifics from oslo.log

2016-05-02 Thread Ronald Bradford
Julien,

As mentioned, I have been working through the App Agnostic Parameters
blueprint, which will cover a review of context attributes and kwarg-specific
use of oslo.log.
Presently, due to legacy Nova usage, this current pattern is needed as is.

There was also a discussion at Austin with Michael Still regarding our work
with Nova to enhance the usage of resource UUIDs, which will enable some
subsequent deprecation of instance-specific context and kwargs, but this
requires some prerequisite work.
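
(For context, a minimal sketch of the Nova-specific pattern under review;
illustrative usage only, not a proposed API. oslo.log's logger adapter
accepts an instance keyword argument and prefixes the log line with the
instance:

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def resize(instance):
        # 'instance' here is the Nova-ism being reviewed; a generic
        # resource UUID would be the application-agnostic replacement.
        LOG.info('Starting resize', instance={'uuid': instance['uuid']})

The app-agnostic work would replace such kwargs with generic resource
identifiers.)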


Regards

Ronald

Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford 
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Fri, Apr 15, 2016 at 4:11 AM, Julien Danjou  wrote:

> On Thu, Apr 14 2016, Joshua Harlow wrote:
>
> > Well not entirely IMHO. I think oslo should obviously be easy to use
> inside
> > openstack and as well outside, with preference to inside openstack
> (especially
> > for libraries that start with 'oslo.*'). When we can make it more useable
> > outside openstack (and without hurting the mission of openstack being the
> > primary target for 'oslo.*' libraries) then obviously we should try to
> do our
> > best to...
> >
> > Will all the oslo.* libraries as they exist be forever perfect at a
> given point
> > in time, no. We can always do better, which is where the community comes
> in :)
>
> I tend to agree with you Josh, though you might want to keep in mind
> that OpenStack is now composed of 54 teams and even more projects. So at
> this stage making libraries "for OpenStack" is going to become a very
> vague definition as the need for generic code is going to increase.
>
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Backwards compatibility policy

2016-05-02 Thread Ronald Bradford
This was a good session, led by Robert, to fully understand the
repercussions of not having backward compatibility guidelines.

In layman's terms, an Oslo API change made in Mitaka (the last release)
must remain fully backward compatible throughout the entire Newton release
(the current release).
A key reason is that other OpenStack projects do or may use and adopt any
Oslo changes from Mitaka.
If Oslo DID NOT maintain backwards compatibility throughout the full length
of the next cycle, then projects that had either implemented or planned to
implement Mitaka Oslo features might have failing gate tests during this
cycle, causing rework.
Oslo needs to remain stable during an entire cycle to ensure projects can
work consistently with Oslo libraries.
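
(A minimal sketch of what that policy means in code, using a plain
warnings-based shim with illustrative names; Oslo libraries typically use
debtcollector for this:

    import warnings

    def fetch(url, retries=3):
        """New API, introduced in Mitaka."""
        return 'response for %s after <=%d tries' % (url, retries)

    def fetch_once(url):
        # Old API, deprecated in Mitaka.  Per the policy above, it must keep
        # working through the whole Newton cycle and can only be removed in
        # a release cut at the start of the O cycle.
        warnings.warn('fetch_once() is deprecated; use fetch(url, retries=1)',
                      DeprecationWarning, stacklevel=2)
        return fetch(url, retries=1)

Callers get a full cycle of deprecation warnings before the old entry point
can go away.)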

Thanks to Angus Lees (Gus), who in true debate form took the side of nay to
validate the proposal and enable a clear discussion (in the second-to-last
session).

Ronald



On Fri, Apr 29, 2016 at 2:13 PM, Robert Collins wrote:

> Yesterday, in
> https://etherpad.openstack.org/p/newton-oslo-backwards-compat-testing
> we actually reached consensus - well, 7/10 for and noone against -
> doing 1 cycle backwards compatibility in Oslo.
>
> That means that after we release a thing, code using it must not break
> (due to an Oslo change) until a release from the cycle *after* the
> next cycle.
>
> Concretely, add a thing at the start of L, we can break code using
> that exact API etc at the start of N. We could try to be clever and
> say 'Release of L, can break at the start of N', but 'release of L' is
> now much fuzzier than it used to be - particularly with independent
> releases of some things. So I'd rather have a really simple easy to
> define and grep for rule than try to optimise the window. Happy if
> someone else can come up with a way to optimise it.
>
> This is obviously not binding on any non-oslo projects: os-brick,
> python-neutronclient etc etc: though I believe folks in such projects
> are generally cautious as well.
>
> The key thing here is that we already have to do backwards compat to
> move folk off of a poor API onto a new one: all this policy says is
> that the grace period needs to cross release boundaries.
>
> -Rob
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Andreas Jaeger

On 05/02/2016 03:05 PM, Steven Dake (stdake) wrote:



On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:


On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser) wrote:

Although it seems I'm in the minority, I am in favor of unified repo.

From: "Steven Dake (stdake)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Sunday, May 1, 2016 at 5:03 PM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

Ryan had rightly pointed out that when we made the original proposal at 9am
that morning, we had asked folks if they wanted to participate in a separate
repository.

I don't think a separate repository is the correct approach, based upon
one-off private conversations with folks at summit.  Many people from that
list
approached me and indicated they would like to see the work integrated
in
one repository as outlined in my vote proposal email.  The reasons I
heard
were:

Better integration of the community
Better integration of the code base
Doesn't present an us vs them mentality that one could argue happened
during
kolla-mesos
A second repository makes k8s a second class citizen deployment
architecture
without a voice in the full deployment methodology
Two gating methods versus one
No going back to a unified repository while preserving git history

In favor of the separate repositories I heard:

It presents a unified workspace for kubernetes alone
Packaging without ansible is simpler as the ansible directory need not
be
deleted

There were other complaints but not many pros.  Unfortunately I failed
to
communicate these complaints to the core team prior to the vote, so now
is
the time for fixing that.

I'll leave it open to the new folks that want to do the work if they
want to
work on an offshoot repository and open us up to the possible problems
above.

If you are on this list:

Ryan Hallisey
Britt Houser

mark casey

Steven Dake (delta-alpha-kilo-echo)

Michael Schmidt

Marian Schwarz

Andrew Battye

Kevin Fox (kfox)

Sidharth Surana (ssurana)

Michal Rostecki (mrostecki)

Swapnil Kulkarni (coolsvap)

MD NADEEM (mail2nadeem92)

Vikram Hosakote (vhosakot)

Jeff Peeler (jpeeler)

Martin Andre (mandre)

Ian Main (Slower)

Hui Kang (huikang)

Serguei Bezverkhi (sbezverk)

Alex Polvi (polvi)

Rob Mason

Alicja Kwasniewska

sean mooney (sean-k-mooney)

Keith Byrne (kbyrne)

Zdenek Janda (xdeu)

Brandon Jozsa (v1k0d3n)

Rajath Agasthya (rajathagasthya)
Jinay Vora
Hui Kang
Davanum Srinivas



Please speak up if you are in favor of a separate repository or a
unified
repository.

The core reviewers will still take responsibility for determining if we
proceed on the action of implementing kubernetes in general.

Thank you
-steve






I am in favor of having two separate repos and evaluating the
merge/split option later.
Though in the longer run, I would recommend having a single repo with
multiple stable deployment tools (maybe too early to share views, but
yeah).

Swapnil


Swapnil,

I gather this is what people want, but this cannot be done with git while
maintaining history.  To do this, we would have to "cp oldrepo/files to
newrepo/files" and the git history would be lost.  That is why choosing
two repositories up front is irreversible.


On the other hand: if you start with one repo and want to split later, you
can use git filter-branch to create a copy of the repo with just the
kubernetes files in it and set up a new repository with that content. So,
you would keep the history...


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [Docs] Installation Guide Plans for Newton

2016-05-02 Thread Andreas Jaeger

On 05/02/2016 04:58 PM, Jim Rollenhagen wrote:

On Mon, May 02, 2016 at 03:52:17PM +0200, Andreas Jaeger wrote:

On 05/02/2016 03:34 PM, Jim Rollenhagen wrote:

On Thu, Apr 28, 2016 at 10:38:13AM -0500, Lana Brindley wrote:

Greetings from Austin!

Yesterday we held the Install Guide planning workgroup. It was a full room, 
with some very robust discussion, and I want to thank everyone for 
participating and sharing their thoughts on this matter. We have some solid 
goals to achieve for Newton now.

The etherpad with the discussion is here: 
https://etherpad.openstack.org/p/austin-docs-workgroup-install

The original spec to create a way for projects to publish Install Guides has
now been merged, thanks to consensus being achieved at the Design Summit
session, and a few minor edits: https://review.openstack.org/#/c/301284/ We
can now begin work on this.

There is a new spec up to cover the remainder of the work to be done on the 
existing Install Guide: https://review.openstack.org/#/c/310588 This still 
needs some iterations to get it into shape (including determining the name we 
are going to use for the Install Guide). So, patches welcome.

Again, thank you to all the many people who attended the session, and those who 
have found me this week and discussed the future of the Install Guide. I'm 
confident that this plan has broad acceptance, is achievable during the Newton 
cycle, and it moves the documentation project forward.

Feedback is, as always, very welcome.


One question on this: the spec says that projects may publish to
docs.openstack.org/project-install-guide/RELEASE/SERVICE/ (yay!).

Some projects have intermediate releases during the cycle; for example
in Mitaka, ironic had 4.3.0, 5.0.0, and 5.1.0 releases, with 5.1.0 being
the base for stable/mitaka.

Is it okay/expected that projects with this model publish an install
guide for each intermediate release as well? Again, using mitaka, ironic
would publish:

docs.openstack.org/project-install-guide/4.3.0/ironic/
docs.openstack.org/project-install-guide/5.0.0/ironic/
docs.openstack.org/project-install-guide/5.1.0/ironic/
docs.openstack.org/project-install-guide/mitaka/ironic/

Thanks for all the hard work on this! :)



Let me ask this differently:

Let's assume we have one index file for the mitaka version of all install
guides which gets published at mitaka release time. Where should we link to?

Does that help?


Maybe? Clearly the index for Mitaka would point to the final mitaka
install guide (which would be published from stable/mitaka, and accept
backports). However, if we release a new feature in 4.3.0 for example,
and update our install guide, we should publish that with the release.
We wouldn't backport it to Liberty (because the feature didn't exist in
Liberty), but it's still half a cycle until the Mitaka guide is
published.

FWIW, we currently do this for the /developer endpoint:
http://docs.openstack.org/developer/ironic/4.3.0/deploy/install-guide.html

I'd really like to continue doing this. :)


So, if we publish *now* - let's say
install-guide/liberty/index.html
install-guide/mitaka/index.html
install-guide/newton/index.html

Then, ironic would publish the 4.3.0 documentation and the 5.1.0 - and the
5.1.0 would be referenced from the mitaka/index.html page. But if 4.3.0 is
published, on which index page should it appear? Or is that just
referenced from the releases?


So, I'm fine with you continuing to publish it - I'm more concerned
about what will be in which index... Having extra versions is not a problem
IMHO.


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [oslo] Reminder, no IRC meeting this week

2016-05-02 Thread Joshua Harlow

Just a reminder to folks,

I'll send out the summit notes today and get releases going, but as we
just had a summit last week, a meeting today on IRC doesn't seem needed
(and most people may not be back from the summit yet anyway).


Have a great week folks (and it was great seeing everyone)!

-Josh



Re: [openstack-dev] [tc] Leadership training dates - please confirm attendance

2016-05-02 Thread Alexis Monville
On Fri, Apr 22, 2016 at 4:57 PM, Colette Alexander wrote:
> On Thu, Apr 21, 2016 at 10:42 AM, Doug Hellmann  wrote:
>>
>> Excerpts from Colette Alexander's message of 2016-04-21 08:07:52 -0700:
>>
>> >
>> > Hi everyone,
>> >
>> > Just checking in on this - if you're a current or past member of the TC and
>> > haven't yet signed up on the etherpad [0] and would like to attend
>> > training, please do so by tomorrow if you can! If you're waiting on travel
>> > approval or something else before you confirm, but want me to hold you a
>> > spot, just ping me on IRC and let me know.
>> >
>> > If you'd like to go to leadership training and you're *not* a past or
>> > current TC member, stay tuned - I'll know about free spots and will send
>> > out information during the summit next week.
>> >
>> > Thank you!
>> >
>> > -colette/gothicmindfood
>> >
>> > [0] https://etherpad.openstack.org/p/Leadershiptraining
>>
>> I've been waiting to have a chance to confer with folks in Austin. Are
>> we under a deadline to get a head-count?
>
> Only in the sense that I wanted to give non-TC folks some decent lead
> time with travel arrangements if there are free spots and they want to
> attend. Let's push it back a week if you all need the time to confer.
>
>

Just a quick note on that training. I haven't attended the training
myself (yet), so what I will say is only what I have heard.

Zingerman's is a company quoted in nearly every book that considers
alternative ways of managing an organization.
I think sharing this experience will be really valuable for the TC
and board members, and for OpenStack as a whole (and the organizations
they are in).
I hope that others will have the opportunity to join and share this
experience.

I agree with Thierry that a half day to discuss how to turn the
training into action could be important (if not mandatory ;) ).


Let us know how it was :)

Alexis

PS: while you are in Ann Arbor you could also visit a joyful IT
company: https://www.menloinnovations.com/by-visiting/menlo-factory-tour



Re: [openstack-dev] [openstack-health][QA] We need your feedbacks

2016-05-02 Thread Andrea Frittoli
I personally like the OpenStack Health UI/UX. I use it a lot as an initial
source of information.
Thanks to the OH team for all the great work on this!

And +1 to the new "feedback" link on the top and bottom of every page :)

A few thoughts on my experience with OH.

It can take a few clicks in OH to get to the view one is looking for - and
some views are slow to load.
It is much better now that the URL in the browser is always updated to
reflect the current view - very useful for bookmarking and sharing.
One limitation is that the sorting of columns is not reflected in the
URL [0].

If I want to work on getting a new job into the gate, I'll create it as a
non-voting check or experimental job first, and there's no data for those in
OH yet.

If I want to check why an existing job is failing (e.g. a periodic job [1]),
I can see which tests are failing and how often, which is a great starting
point for my work, but if I click on any of the tests the job filtering is
lost. Seeing that a test is successful in some jobs and fails in others
is good data, but I need to be able to see where the failures are coming
from. Ideas on how to implement that are:
- the ability to apply multiple metadata-based filters to a view
- a list of the available metadata, and a view of the distribution of values
for each metadata field in the current view - a la ELK. So, for instance, if
I select to see only failures in a graph, show the distribution only over
failures. If I click on a data point, show the distribution only for that
point. That way I can see where certain deviations are coming from
- the list of logs associated with a certain point in the graph, for
further analysis

Injecting the metadata into the job name can help overcome the filtering
limitations a bit, but we cannot inject metadata into the test names, so at
the test level the context is always lost.

I use OpenStack Health downstream as well, to track results from a
combination of test systems (CI and more), and the fact that both the DB
schema and the dashboard are metadata agnostic is very useful - I have
extra metadata which is injected in certain jobs, like the test rack name
and product version. But again, I need the ability to select on multiple
metadata keys at once, so I can produce test views / reports for specific
software versions and test environments.
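
(A toy of the multi-key metadata filter I have in mind; the data model is
illustrative only, not openstack-health's real schema or API:

    runs = [
        {'test': 'test_boot', 'status': 'fail',
         'metadata': {'job': 'periodic-neutron-full', 'rack': 'r1'}},
        {'test': 'test_boot', 'status': 'success',
         'metadata': {'job': 'gate-tempest-full', 'rack': 'r2'}},
    ]

    def filter_runs(runs, **criteria):
        # Keep runs whose metadata matches every given key=value pair.
        return [r for r in runs
                if all(r['metadata'].get(k) == v
                       for k, v in criteria.items())]

    failures = filter_runs(runs, job='periodic-neutron-full', rack='r1')

Being able to stack criteria like this from the UI would cover both the
upstream and the downstream use cases above.)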

andrea

[0] https://bugs.launchpad.net/openstack-health/+bug/1577420
[1]
http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-neutron-full-test-accounts-master?groupKey=project=hour=2016-05-02T14:31:26.423Z




On Fri, Apr 29, 2016 at 8:15 PM Masayuki Igawa wrote:

> Hi,
>
> Now, we are making OpenStack-Health[1] which is a dashboard for
> visualizing test results of OpenStack CI jobs. This is heavily under
> development mode but works, you can use it now.
>
> We'd like to get your feedback to make it better UI/UX. So, if you
> have any comments or feedbacks, please feel free to file it as a
> bug[2] and/or submit patches[3].
>
> This is a kind of advertisements, however, I think openstack-health
> could be useful for all projects through knowing the development
> status of the OpenStack projects.
>
>
> [1] http://status.openstack.org/openstack-health/
> [2] https://bugs.launchpad.net/openstack-health
> [3] http://git.openstack.org/cgit/openstack/openstack-health
>
> Best Regards,
> -- Masayuki Igawa


Re: [openstack-dev] [Docs] Installation Guide Plans for Newton

2016-05-02 Thread Andreas Jaeger

On 05/02/2016 03:34 PM, Jim Rollenhagen wrote:

On Thu, Apr 28, 2016 at 10:38:13AM -0500, Lana Brindley wrote:

Greetings from Austin!

Yesterday we held the Install Guide planning workgroup. It was a full room, 
with some very robust discussion, and I want to thank everyone for 
participating and sharing their thoughts on this matter. We have some solid 
goals to achieve for Newton now.

The etherpad with the discussion is here: 
https://etherpad.openstack.org/p/austin-docs-workgroup-install

The original spec to create a way for projects to publish Install Guides has
now been merged, thanks to consensus being achieved at the Design Summit
session, and a few minor edits: https://review.openstack.org/#/c/301284/ We
can now begin work on this.

There is a new spec up to cover the remainder of the work to be done on the 
existing Install Guide: https://review.openstack.org/#/c/310588 This still 
needs some iterations to get it into shape (including determining the name we 
are going to use for the Install Guide). So, patches welcome.

Again, thank you to all the many people who attended the session, and those who 
have found me this week and discussed the future of the Install Guide. I'm 
confident that this plan has broad acceptance, is achievable during the Newton 
cycle, and it moves the documentation project forward.

Feedback is, as always, very welcome.


One question on this: the spec says that projects may publish to
docs.openstack.org/project-install-guide/RELEASE/SERVICE/ (yay!).

Some projects have intermediate releases during the cycle; for example
in Mitaka, ironic had 4.3.0, 5.0.0, and 5.1.0 releases, with 5.1.0 being
the base for stable/mitaka.

Is it okay/expected that projects with this model publish an install
guide for each intermediate release as well? Again, using mitaka, ironic
would publish:

docs.openstack.org/project-install-guide/4.3.0/ironic/
docs.openstack.org/project-install-guide/5.0.0/ironic/
docs.openstack.org/project-install-guide/5.1.0/ironic/
docs.openstack.org/project-install-guide/mitaka/ironic/

Thanks for all the hard work on this! :)



Let me ask this differently:

Let's assume we have one index file for the mitaka version of all 
install guides which gets published at mitaka release time. Where should 
we link to?


Does that help?

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [QA][RFC] Skip this week meeting

2016-05-02 Thread Andrea Frittoli
I don't have anything urgent so fine with me.

Andrea

On Mon, 2 May 2016, 3:04 p.m., Masayuki Igawa wrote:

> Hi QA guys,
>
> We have very few topics to discuss this week, so I think we should skip
> this week's meeting.
> Any thoughts?
>
> Best Regards,
> -- Masayuki Igawa
>


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Ryan Hallisey
>I am in the favor of having two separate repos and evaluating the
>merge/split option later.
>Though in the longer run, I would recommend having a single repo with
>multiple stable deployment tools(maybe too early to comment views but
>yeah)
>
>Swapnil

>Swapnil,

>I gather this is what people want but this cannot be done with git and
>maintain history.  To do this, we would have to "cp oldrepo/files to
>newrepo/files" and the git history would be lost.  That is why choosing
>two repositories up front is irreversible.

When the community votes on whether or not to merge down the road, it won't
be far enough in the future that losing the git history will be catastrophic.
There won't even be a release for the Kubernetes solution at that point. Plus,
it's still up in the air whether the history gets lost at all, because it's
not guaranteed that the community will vote to merge.

-Ryan



Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Michal Rostecki
It seems to me that we have a dilemma between security (abstaining from
creating a core group which may overuse its rights in the kolla repo) and
usability (not having multiple repos, which we experienced badly in the
kolla-mesos era).


I don't find the argument for having the k8s ecosystem in a separate repo
solid. Of course creating a separate Python package, being the standalone
CLI, is not problematic. But the deployment of k8s itself for the
development environment (i.e. Vagrant) or CI may be painful.


When developing kolla-mesos, we created Ansible playbooks for deploying
Mesos. We had to duplicate the kolla-ansible script and the kolla_docker
Ansible module. We also duplicated a lot from the Vagrantfile. That was
because no officially available method of deploying Mesos was flexible
enough to meet the requirements of kolla-mesos development. And I'm almost
sure that the situation will be similar with k8s - I don't see any way to
reuse the official kube-up scripts or Salt manifests for the kolla-k8s
needs.


Second, we implemented our own Dockerfiles for Mesos. I don't know
whether that will be needed for k8s - maybe yes, maybe not. But if yes,
then handling the build of underlay infra containers from the kolla repo
for the needs of kolla-kubernetes looks overcomplicated, exactly as it was
between kolla and kolla-mesos.


Therefore, I'm in favor of having one repository. I would rather monitor
the actions of the kolla-k8s cores for a short period of time than begin
with duplicates and technical debt from the start of development.


Cheers,
Michal



Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Ryan Hallisey
+1 to start kolla-kubernetes work.

- Original Message -
From: "Swapnil Kulkarni" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 2, 2016 12:59:40 AM
Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes 
repository management proposal up for vote

On Mon, May 2, 2016 at 10:08 AM, Vikram Hosakote (vhosakot) wrote:
> A separate repo will land us in the same spot as we had with kolla-mesos
> originally.  We had all kinds of variance in the implementation.
>
> I’m in favor of a single repo.
>
> +1 for the single repo.
>

I agree with you Vikram, but we should consider the bootstrapping
requirements for new deployment technologies and learn from our
failures with kolla-mesos.

At the same time, it will help us evaluate the deployment technologies
going ahead without disrupting the kolla repo, which we can treat as a
repo with stable images & associated deployment tools.

> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: Vikram Hosakote 
> Date: Sunday, May 1, 2016 at 11:36 PM
>
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> kolla-kubernetes repository management proposal up for vote
>
> Please add me too to the list!
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: Michał Jastrzębski 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Saturday, April 30, 2016 at 9:58 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> kolla-kubernetes repository management proposal up for vote
>
> Add me too please Steven.
>
> On 30 April 2016 at 09:50, Steven Dake (stdake)  wrote:
>
> Fellow core reviewers,
>
> We had a fantastic turnout at our fishbowl kubernetes as an underlay for
> Kolla session.  The etherpad documents the folks interested and discussion
> at summit[1].
>
> This proposal is mostly based upon a combination of several discussions at
> open design meetings coupled with the kubernetes underlay discussion.
>
> The proposal (and what we are voting on) is as follows:
>
> Folks in the following list will be added to a kolla-k8s-core group.
>
>   This kolla-k8s-core group will be responsible for code reviews and code
> submissions to the kolla repository for the /kubernetes top level directory.
> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
> with (-2) votes on TLD directories other than kubernetes will be handled
> on a case by case basis, with several "training warnings" followed by removal
> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
> subgroup of the kolla-core reviewer team, which means they in effect have
> all of the ACL access of the existing kolla repository.  I think it is
> better in this case to trust these individuals to do the right thing and only
> approve changes for the kubernetes directory until proposed for the
> kolla-core reviewer group where they can gate changes to any part of the
> repository.
>
> Britt Houser
>
> mark casey
>
> Steven Dake (delta-alpha-kilo-echo)
>
> Michael Schmidt
>
> Marian Schwarz
>
> Andrew Battye
>
> Kevin Fox (kfox)
>
> Sidharth Surana (ssurana)
>
> Michal Rostecki (mrostecki)
>
> Swapnil Kulkarni (coolsvap)
>
> MD NADEEM (mail2nadeem92)
>
> Vikram Hosakote (vhosakot)
>
> Jeff Peeler (jpeeler)
>
> Martin Andre (mandre)
>
> Ian Main (Slower)
>
> Hui Kang (huikang)
>
> Serguei Bezverkhi (sbezverk)
>
> Alex Polvi (polvi)
>
> Rob Mason
>
> Alicja Kwasniewska
>
> sean mooney (sean-k-mooney)
>
> Keith Byrne (kbyrne)
>
> Zdenek Janda (xdeu)
>
> Brandon Jozsa (v1k0d3n)
>
> Rajath Agasthya (rajathagasthya)
>
>
> If you already are in the kolla-core review team, you won't be added to the
> kolla-k8s-core team as you will already have the necessary ACLs to do the
> job.  If you feel you would like to join this initial bootstrapping process,
> please add your name to the etherpad in [1].
>
> After 8 weeks (July 15th), folks that have not been actively reviewing or
> committing code will be removed from the kolla-k8s-core group.  We will use
> the governance repository metrics for team size [2] which is either 30
> reviews over 6 months (in this case, 10 reviews), or 6 commits over 6 months
> (in this case 2 commits) to the repository.  Folks that don't meet the
> qualifications are still welcome to commit to the repository and contribute
> code or documentation but will lose approval rights on patches.
>
> The kubernetes codebase will be maintained in the
> https://github.com/openstack/kolla repository under the kubernetes top level
> directory.  Contributors that become 

[openstack-dev] [telemetry] Newton Summit recap

2016-05-02 Thread Julien Danjou
Hi,

I wrote a quick blog post about what was discussed during the
summit for Telemetry, if you're interested:

  https://julien.danjou.info/blog/2016/openstack-summit-newton-austin-telemetry

Feel free to shoot over questions if you have any,

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Requirements for becoming approved official project

2016-05-02 Thread joehuang
Hi, Shinobu,

Many thanks for checking the requirements for Tricircle to become an OpenStack 
project, and thanks to Thierry for the clarification. Glad to know that we are 
close to meeting the OpenStack official project criteria. 

Let's discuss the initial PTL election in the weekly meeting, and start the 
election after that if needed.

Best Regards
Chaoyi Huang ( joehuang )

From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 02 May 2016 18:48
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle] Requirements for becoming approved 
official project

Hi Thierry,

On Mon, May 2, 2016 at 5:45 PM, Thierry Carrez  wrote:
> Shinobu Kinjo wrote:
>>
>> I guess it's usable: [1], [2], [3], and probably more...
>>
>> The reason I can still only guess is that there is such a large
>> amount of documentation!
>> It's great work, but there is too much of it.
>
>
> We have transitioned most of the documentation off the wiki, but there are
> still a number of pages that are not properly deprecated.
>
>> [1] https://wiki.openstack.org/wiki/PTL_Guide
>
>
> This is now mostly replaced by the project team guide, so I marked this one
> as deprecated.

Honestly, we should clean up the deprecated pages, since there
is so much documentation;
it's really hard to read every single piece...

Anyway, better than nothing though.

>
> As far as initial election goes, if there is a single candidate no need to
> organize a formal election. If you need to run one, you can use CIVS
> (http://civs.cs.cornell.edu/) since that is what we use for the official
> elections: https://wiki.openstack.org/wiki/Election_Officiating_Guidelines

Thank you for pointing it out.
That is really good advice.

>
> Regards,
>
> --
> Thierry Carrez (ttx)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting on May 4 - Newton design

2016-05-02 Thread Afek, Ifat (Nokia - IL)
Hi,

The next Vitrage weekly meeting will be held this coming Wednesday, May 4, at 
9:00 UTC. 

Agenda:

* Ohad and alexey_weyl will give an update on the Austin Summit and their 
presentations [1][2]

* We will start working on Vitrage Newton plan. I opened an etherpad for this 
purpose[3], please use it to add your ideas (I wrote down a few). You are also 
welcome to suggest new blueprints[4].

Thanks,
Ifat.

[1] https://www.youtube.com/watch?v=9Qw5coTLgMo 
[2] https://www.youtube.com/watch?v=ey68KNKXc5c 
[3] https://etherpad.openstack.org/p/vitrage-newton-planning 
[4] https://blueprints.launchpad.net/vitrage 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Jeff Peeler
On Mon, May 2, 2016 at 9:50 AM, Andreas Jaeger  wrote:
> On 05/02/2016 03:05 PM, Steven Dake (stdake) wrote:
>> Swapnil,
>>
>> I gather this is what people want but this cannot be done with git and
>> maintain history.  To do this, we would have to "cp oldrepo/files to
>> newrepo/files" and the git history would be lost.  That is why choosing
>> two repositories up front is irreversible.
>
>
> On the other hand: If you start with one and want to split later, you can
> use git-filter to create a copy of the repo with just the kubernetes files
> in it and set up a new repository with that content. So, you would keep the
> history...

Andreas, couldn't you also do the reverse with two separate
repositories and merge them while preserving the history, using subtree
merging [1]? Even if I'm mistaken about that, I'm sure repository
merging can be done somehow, as Linus merged gitk into git years ago
with the history preserved. So really, I don't think git history should
be part of this discussion.

[1] 
https://www.kernel.org/pub/software/scm/git/docs/howto/using-merge-subtree.html
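
For the record, a rough sketch of those subtree-merge steps, driven from
Python just so it's self-contained (the remote name, URL, and target
prefix below are made up; the underlying git commands are the ones from
the howto in [1]):

import subprocess

def git(*args):
    subprocess.check_call(('git',) + args)

# Fetch the other repository's history into this one.
git('remote', 'add', '-f', 'k8s',
    'https://example.org/kolla-kubernetes.git')
# Record a merge without touching our working tree yet
# (--allow-unrelated-histories is required on git >= 2.9).
git('merge', '-s', 'ours', '--no-commit', '--allow-unrelated-histories',
    'k8s/master')
# Read the other repository into a subdirectory, keeping its history.
git('read-tree', '--prefix=kubernetes/', '-u', 'k8s/master')
git('commit', '-m', 'Merge kolla-kubernetes as the kubernetes/ subtree')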

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Jeff Peeler
Also +1 for working on kolla-kubernetes. (Please read this thread if
you haven't yet):

http://lists.openstack.org/pipermail/openstack-dev/2016-May/093575.html

On Mon, May 2, 2016 at 10:56 AM, Ryan Hallisey  wrote:
> +1 to start kolla-kubernetes work.
>
> - Original Message -
> From: "Swapnil Kulkarni" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Monday, May 2, 2016 12:59:40 AM
> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra] 
> kolla-kubernetes repository management proposal up for vote
>
> On Mon, May 2, 2016 at 10:08 AM, Vikram Hosakote (vhosakot)
>  wrote:
>> A separate repo will land us in the same spot as we had with kolla-mesos
>> originally.  We had all kinds of variance in the implementation.
>>
>> I’m in favor of a single repo.
>>
>> +1 for the single repo.
>>
>
> I agree with you Vikram, but we should consider the bootstrapping
> requirements for new deployment technologies and learn from our
> failures with kolla-mesos.
>
> At the same time, it will help us evaluate the deployment technologies
> going forward without disrupting the kolla repo, which we can treat as a
> repo with stable images & associated deployment tools.
>
>> Regards,
>> Vikram Hosakote
>> IRC: vhosakot
>>
>> From: Vikram Hosakote 
>> Date: Sunday, May 1, 2016 at 11:36 PM
>>
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
>> kolla-kubernetes repository management proposal up for vote
>>
>> Please add me too to the list!
>>
>> Regards,
>> Vikram Hosakote
>> IRC: vhosakot
>>
>> From: Michał Jastrzębski 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Saturday, April 30, 2016 at 9:58 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
>> kolla-kubernetes repository management proposal up for vote
>>
>> Add me too please Steven.
>>
>> On 30 April 2016 at 09:50, Steven Dake (stdake)  wrote:
>>
>> Fellow core reviewers,
>>
>> We had a fantastic turnout at our fishbowl session on kubernetes as an
>> underlay for Kolla.  The etherpad documents the folks interested and the
>> discussion at the summit [1].
>>
>> This proposal is mostly based upon a combination of several discussions at
>> open design meetings coupled with the kubernetes underlay discussion.
>>
>> The proposal (and what we are voting on) is as follows:
>>
>> Folks in the following list will be added to a kolla-k8s-core group.
>>
>>   This kolla-k8s-core group will be responsible for code reviews and code
>> submissions to the kolla repository for the /kubernetes top level directory.
>> Individuals in kolla-k8s-core that consistently approve (+2) or disapprove
>> with a (-2) vote to TLD directories other than kubernetes will be handled
>> on a case-by-case basis with several "training warnings" followed by removal
>> from the kolla-k8s-core group.  The kolla-k8s-core group will be added as a
>> subgroup of the kolla-core reviewer team, which means they in effect have
>> the same ACL access as the existing kolla repository.  I think it is
>> better in this case to trust these individuals to do the right thing and only
>> approve changes for the kubernetes directory until they are proposed for the
>> kolla-core reviewer group, where they can gate changes to any part of the
>> repository.
>>
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>   Michal Rostecki (mrostecki)
>>
>>Swapnil Kulkarni (coolsvap)
>>
>>MD NADEEM (mail2nadeem92)
>>
>>Vikram Hosakote (vhosakot)
>>
>>Jeff Peeler (jpeeler)
>>
>>Martin Andre (mandre)
>>
>>Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>>
>>
>> If you already are in the kolla-core review team, you won't be added to the
>> kolla-k8s-core team as you will already have the necessary ACLs to do the
>> job.  If you feel you would like to join this initial bootstrapping process,
>> please add your name to the etherpad in [1].
>>
>> After 8 weeks (July 15th), folks that have not been actively reviewing or
>> committing code will be removed from the kolla-k8s-core group.  We will use
>> the governance repository metrics for team size [2] which is either 30
>> reviews over 6 months (in this case, 10 reviews), or 6 commits 

Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-02 Thread Nikhil Komawar
Hi Djimeli,

Thank you for expressing interest in contributing to projects in Glance.
Glance community always welcomes passionate developers to contribute to
our open source and open community platform. It's great to see that
you've participated in GSOC and looking forward to contributing to
OpenStack.

You're very right about the information on the projects that have been
proposed for GSOC and Outreachy; at the same time you may be aware that
OpenStack is not in the GSOC 2016 list. So, are you looking to
contribute to the projects on your own time/part time? The project
"extended support for requests library" has been assigned to a selected
Outreachy intern; wiki will be updated as we officially complete the
formalities and the Glare client support is something that will need a
few weeks of time to get started (right around the time the internships
are supposed to start). Both of them are not a priority for Glance
currently as there are other or overarching items that need attention.
That said, I have 3-4 other small projects in Glance that need
contribution; they are not documented yet, but I want only the most
passionate developers to work on them, as they may need a bit of
background research.

I will soon post the project priorities if that is something you are
interested in knowing as well. Just hang out with us in the #openstack-glance
channel on Freenode; at least that's where most of our developers
interact. I would recommend picking up a couple of small bugs and start
working on them before even diving into the projects to get a better
sense of the code base, design and the OpenStack way of working.

Always feel free to reach out; I prefer IRC and my nick on freenode is
nikhil.

Cheers!

On 5/1/16 5:28 AM, Djimeli Konrad wrote:
> Hello,
>
> My name is Djimeli Konrad a second year computer science student from
> the University of Buea, Cameroon, Africa. I am a GSoC 2015 participant,
> and I have also worked on some open-source projects on github
> (http://github.com/djkonro) and sourceforge
> (https://sourceforge.net/u/konrado/profile/). I am very passionate
> about cloud development, distributed systems, virtualization and I
> would like to contribute to OpenStack.
>
> I have gone through the OpenStack GSoC2016/Outreachy2016 projects
> (https://wiki.openstack.org/wiki/Internship_ideas ) and I am
> interested in working on "Glance - Extended support for requests
> library" and "Glance - Develop a python based GLARE (GLance Artifacts
> REpository) client library and shell API". I would like to get some
> detail regarding the projects. Is this a priority project? What is
> the expected outcome? And what are some starting points?
>
> I am proficient with C, C++, and Python, and I have successfully built
> and set up OpenStack using devstack.
>
> Thanks
> Konrad
> https://www.linkedin.com/in/konrad-djimeli-69809b97
>
>


-- 

Thanks,
Nikhil



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-health][QA] We need your feedbacks

2016-05-02 Thread Franklin Naval
On Mon, May 2, 2016 at 10:26 AM, Andrea Frittoli 
wrote:

> I personally like the OpenStack Health UI/UX. I use it a lot as an initial
> source of information.
> Thanks to the OH team for all the great work on this!
>

+1 - This is a really awesome tool.  I'll be writing up issues in launchpad
as I encounter them.
Thank you so much for this, Masayuki and OH team!

Cheers,
-Franklin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Edward Leafe
On May 2, 2016, at 10:51 AM, Mike Bayer  wrote:

>> Concretely, we think that there are three possible approaches:
>> 1) We can use the SQLAlchemy API as the common denominator between a 
>> relational and non-relational implementation of the db.api component. These 
>> two implementation could continue to converge by sharing a large amount of 
>> code.
>> 2) We create a new non-relational implementation (from scratch) of the 
>> db.api component. It would require probably more work.
>> 3) We are also studying a last alternative: writing a SQLAlchemy engine 
>> that targets NewSQL databases (scalability + ACID):
>>  - https://github.com/cockroachdb/cockroach
>>  - https://github.com/pingcap/tidb
> 
> Going with a NewSQL backend is by far the best approach here.   That way, 
> very little needs to be reinvented and the application's approach to data 
> doesn't need to dramatically change.

I’m glad that Matthieu responded, but I did want to emphasize one thing: of 
*course* this isn’t an ideal approach, but it *is* a practical one. The biggest 
problem in any change like this isn’t getting it to work or making it perform 
better; it’s being able to make the change while disrupting as little of the 
existing code as possible. Taking an approach that would be more efficient 
would be a non-starter, since it wouldn’t provide a clean upgrade path for 
existing deployments.

Getting this working without ripping out all of the data models that 
currently exist is an amazing feat. And if by doing so it shows that a 
distributed database is indeed possible, it has done more than anything else 
that has ever been discussed in the past few years. 


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Mike Bayer



On 05/02/2016 07:38 AM, Matthieu Simonin wrote:



As far as we understand, the idea of an ORM is to hide the relational database 
behind an object-oriented API.


I actually disagree with that completely.  The reason ORMs are so 
maligned is because of this misconception; developers attempt to use an 
ORM so that they need not have any awareness of their database, how 
queries are constructed, or even its schema's design; witness tools 
such as Django ORM and Rails ActiveRecord which promise this.   You 
then end up with an inefficient and inextensible mess because the 
developers never considered anything about how the database works or 
how it is queried, nor do they even have easy ways to monitor or 
control it while still making use of the tool.   There are many blog 
posts and articles that discuss this and it is in general known as the 
"object relational impedance mismatch".


SQLAlchemy's success comes from its rejection of this entire philosophy. 
 The purpose of SQLAlchemy's ORM is not to "hide" anything but rather 
to apply automation to the many aspects of relational database 
communication as well as row->object mapping that otherwise express 
themselves in an application as either a large amount of repetitive 
boilerplate throughout an application or as an awkward series of ad-hoc 
abstractions that don't really do the job very well.   SQLAlchemy is 
designed to expose both the schema design as well as the structure of 
queries completely.   My talk at [1] goes into this topic in detail 
including specific API architectures that facilitate this concept.
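
To make that concrete, here is a tiny self-contained sketch (my own toy
example, not Nova's actual models): the mapping states the schema
explicitly, and the SQL the ORM emits is visible and tunable rather
than hidden.

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'   # the schema design is stated, not hidden
    id = sa.Column(sa.Integer, primary_key=True)
    host = sa.Column(sa.String(255), index=True)
    vm_state = sa.Column(sa.String(36))

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# The structure of the query is exposed; printing it shows the exact
# SELECT ... WHERE host = ? AND vm_state = ? that will be emitted.
query = session.query(Instance).filter_by(host='node-1', vm_state='active')
print(query)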


It's for that reason that I've always rejected notions of attempting to 
apply SQLAlchemy directly on top of a datastore that is explicitly 
non-relational.   By doing so, you remove a vast portion of the 
functionality that relational databases provide and there's really no 
point in using a tool like SQLAlchemy that is very explicit about DDL 
and SQL on top of that kind of database.


To effectively put SQLAlchemy on top of a non-relational datastore, what 
you really want to do is build an entire SQL engine on top of it.  This 
is actually feasible; I was doing work for the now-defunct FoundationDB 
(was bought by Apple) who had a very good implementation of 
SQL-on-top-of-distributed keystore going, and the Cockroach and TiDB 
projects you mention are definitely the most appropriate choice to take 
if a certain variety of distribution underneath SQL is desired.


 Concerning SQLAlchemy,

the relational aspects of the underlying database may also be used by the 
user, but we observed that in Nova most of the db interactions are written 
in an object-oriented style (few queries use SQL); thus we don't think that 
Nova requires a relational database, it just requires an object-oriented 
abstraction to manipulate a database.


Well IMO that's actually often a problem.  My goal across OpenStack 
projects in general is to allow them to make use of SQL more effectively 
than they do right now; for example, in Neutron I am helping them to 
rework a block of code that inefficiently loads a block of data 
into memory, scans it for CIDR overlaps, and then pushes data back out. 
This approach prevents it from performing a single UPDATE statement and 
ushers in the need for pessimistic locking against concurrent 
transactions.  Instead, I've written for them a simple stored function 
proof-of-concept [2] that will allow the entire operation to be 
performed on the database side alone in a single statement.  Wins like 
these are much less feasible if not impossible when a project decides it 
wants to split its backend store between dramatically different 
databases which don't offer such features.
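
As a hedged illustration of the kind of win I mean (a toy table, not
the actual Neutron proof-of-concept in [2]): instead of SELECT-ing
rows, scanning them in Python, and writing them back under a
pessimistic lock, the whole read-modify-write collapses into one
statement evaluated on the server.

import sqlalchemy as sa

metadata = sa.MetaData()
allocations = sa.Table(
    'allocations', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('in_use', sa.Boolean, nullable=False),
)

engine = sa.create_engine('sqlite://')
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(allocations.insert(),
                 [{'in_use': False}, {'in_use': False}])
    # One UPDATE, evaluated entirely on the database side -- no data
    # pulled into memory, no lock held across a network round trip.
    conn.execute(allocations.update()
                 .where(allocations.c.in_use == sa.false())
                 .values(in_use=True))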




Concretely, we think that there are three possible approaches:
 1) We can use the SQLAlchemy API as the common denominator between a 
relational and non-relational implementation of the db.api component. These two 
implementation could continue to converge by sharing a large amount of code.
 2) We create a new non-relational implementation (from scratch) of the 
db.api component. It would require probably more work.
 3) We are also studying a last alternative: writing a SQLAlchemy engine 
that targets NewSQL databases (scalability + ACID):
  - https://github.com/cockroachdb/cockroach
  - https://github.com/pingcap/tidb


Going with a NewSQL backend is by far the best approach here.   That 
way, very little needs to be reinvented and the application's approach 
to data doesn't need to dramatically change.


But also, w.r.t. Cells there seems to be some remaining debate over why 
exactly a distributed approach is even needed.  As others have posted, a 
single MySQL database, replicated across Galera or not, scales just fine 
for far more data than Nova ever needs to store.  So it's not clear why 
the need for a dramatic rewrite of its datastore is called for.



[1] 

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Fox, Kevin M
One thing we didn't talk about too much at the summit is the part of the spec 
that says we will reuse a bunch of ansible stuff to generate configs for the 
k8s case...

Do we believe that code would be minimal and not impact separate repos much, or 
is the majority of the work in the end going to be focused there? If most of 
the code ends up landing there, then it's probably not worth splitting?

Thanks,
Kevin

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Monday, May 02, 2016 6:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two

On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:

>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
> wrote:
>> Although it seems I'm in the minority, I am in favor of unified repo.
>>
>> From: "Steven Dake (stdake)" 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Sunday, May 1, 2016 at 5:03 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>> Ryan had rightly pointed out that when we made the original proposal at
>> 9am that morning, we had asked folks if they wanted to participate in a
>> separate repository.
>>
>> I don't think a separate repository is the correct approach based upon
>>one
>> off private conversations with folks at summit.  Many people from that
>>list
>> approached me and indicated they would like to see the work integrated
>>in
>> one repository as outlined in my vote proposal email.  The reasons I
>>heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened
>>during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment
>>architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> I favor of the separate repositories I heard
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not
>>be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed
>>to
>> communicate these complaints to the core team prior to the vote, so now
>>is
>> the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they
>>want to
>> work on an offshoot repository and open us up to the possible problems
>> above.
>>
>> If you are on this list:
>>
>> Ryan Hallisey
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>> Jinay Vora
>> Hui Kang
>> Davanum Srinivas
>>
>>
>>
>> Please speak up if you are in favor of a separate repository or a
>>unified
>> repository.
>>
>> The core reviewers will still take responsibility for determining if we
>> proceed on the action of implementing kubernetes in general.
>>
>> Thank you
>> -steve
>>
>>
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>I am in the favor of having two separate repos and evaluating the
>merge/split option later.
>Though in the longer run, I would recommend having a single repo with
>multiple stable deployment tools(maybe too early to comment views but
>yeah)
>
>Swapnil

Swapnil,

I gather this is what people want, but this cannot be done with git
while maintaining history.  To do this, we would have to "cp oldrepo/files to
newrepo/files" and the git history would be lost.  That is why choosing
two repositories up front is irreversible.

Regards
-steve

>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-02 Thread Cammann, Tom
Thanks for the write up Hongbin and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model as 
this is the only networking model Ironic currently supports. Multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
definition do not need versioning, because we can get Heat to pass back the 
template it used to create the bay.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts – bay telemetry, bay monitoring, container 
monitoring.
Bay telemetry is done around actions such as bay/baymodel CRUD operations. This 
is implemented using ceilometer notifications.
Bay monitoring is around monitoring health of individual nodes in the bay 
cluster and we decided to postpone work as more investigation is required on 
what this should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will be handled by Magnum by 
baking in access to cAdvisor/Heapster by default.

- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (i.e. different 
availability zones, flavors), but need to elaborate the details further.

The idea revolves around creating a heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.

Tom


From: Hongbin Lu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions in Austin design 
summit. Please feel free to point out if anything is incorrect or incomplete. 
Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay driver can have API extension and magnum CLI could load the 
extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credential in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad 

Re: [openstack-dev] [Docs] Installation Guide Plans for Newton

2016-05-02 Thread Jim Rollenhagen
On Mon, May 02, 2016 at 05:12:34PM +0200, Andreas Jaeger wrote:
> On 05/02/2016 04:58 PM, Jim Rollenhagen wrote:
> >On Mon, May 02, 2016 at 03:52:17PM +0200, Andreas Jaeger wrote:
> >>On 05/02/2016 03:34 PM, Jim Rollenhagen wrote:
> >>>On Thu, Apr 28, 2016 at 10:38:13AM -0500, Lana Brindley wrote:
> Greetings from Austin!
> 
> Yesterday we held the Install Guide planning workgroup. It was a full 
> room, with some very robust discussion, and I want to thank everyone for 
> participating and sharing their thoughts on this matter. We have some 
> solid goals to achieve for Newton now.
> 
> The etherpad with the discussion is here: 
> https://etherpad.openstack.org/p/austin-docs-workgroup-install
> 
> The original spec to create a way for projects to publish Install Guides 
> has now been merged, thanks to consensus being achieved at the Design 
> Summit session, and a few minor edits: 
> https://review.openstack.org/#/c/301284/ We can now begin work on this.
> 
> There is a new spec up to cover the remainder of the work to be done on 
> the existing Install Guide: https://review.openstack.org/#/c/310588 This 
> still needs some iterations to get it into shape (including determining 
> the name we are going to use for the Install Guide). So, patches welcome.
> 
> Again, thank you to all the many people who attended the session, and 
> those who have found me this week and discussed the future of the Install 
> Guide. I'm confident that this plan has broad acceptance, is achievable 
> during the Newton cycle, and it moves the documentation project forward.
> 
> Feedback is, as always, very welcome.
> >>>
> >>>One question on this: the spec says that projects may publish to
> >>>docs.openstack.org/project-install-guide/RELEASE/SERVICE/ (yay!).
> >>>
> >>>Some projects have intermediate releases during the cycle; for example
> >>>in Mitaka, ironic had 4.3.0, 5.0.0, and 5.1.0 releases, with 5.1.0 being
> >>>the base for stable/mitaka.
> >>>
> >>>Is it okay/expected that projects with this model publish an install
> >>>guide for each intermediate release as well? Again, using mitaka, ironic
> >>>would publish:
> >>>
> >>>docs.openstack.org/project-install-guide/4.3.0/ironic/
> >>>docs.openstack.org/project-install-guide/5.0.0/ironic/
> >>>docs.openstack.org/project-install-guide/5.1.0/ironic/
> >>>docs.openstack.org/project-install-guide/mitaka/ironic/
> >>>
> >>>Thanks for all the hard work on this! :)
> >>
> >>
> >>Let me ask this differently:
> >>
> >>Let's assume we have one index file for the mitaka version of all install
> >>guides which gets published at mitaka release time. Where should we link to?
> >>
> >>Does that help?
> >
> >Maybe? Clearly the index for Mitaka would point to the final mitaka
> >install guide (which would be published from stable/mitaka, and accept
> >backports). However, if we release a new feature in 4.3.0 for example,
> >and update our install guide, we should publish that with the release.
> >We wouldn't backport it to Liberty (because the feature didn't exist in
> >Liberty), but it's still half a cycle until the Mitaka guide is
> >published.
> >
> >FWIW, we currently do this for the /developer endpoint:
> >http://docs.openstack.org/developer/ironic/4.3.0/deploy/install-guide.html
> >
> >I'd really like to continue doing this. :)
> 
> So, if we publish *now* - let's say
> install-guide/liberty/index.html
> install-guide/mitaka/index.html
> install-guide/newton/index.html
> 
> Then, ironic would publish the 4.3.0 documentation and the 5.1 - and the 5.1
> would be referenced from the mitaka/index.html page. But if 4.3.0 is
> published, on which index page should it appear? Or is that just referenced
> from releases?

I think the latter, at least for now.

> 
> So, I'm fine with you continuing to publish it - I'm more concerned about
> what will be in which index... Having extra versions is not a problem IMHO,

Gotcha, just wanted to make sure it would be fine. I'll think on the
index question a bit, it wasn't in the front of my mind. Thanks!

// jim

> 
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-02 Thread Nikhil Komawar
I guess this email answers a few questions I asked in my response to
your initial email expressing interest. Thanks for clarifying!

On 5/1/16 5:47 AM, Djimeli Konrad wrote:
> Hello,
>
> With respect to my previous mail, I would just like to add that I am
> aware the GSoC2016 and Outreachy2016 application periods are over and
> I have always been interested in contributing to these
> projects, regardless of GSoC or Outreachy. I have also successfully
> gone through the OpenStack tutorial on how to submit a first patch and
> I hope to start working on a bug soon.
>
> On 1 May 2016 at 10:28, Djimeli Konrad  wrote:
>> Hello,
>>
>> My name is Djimeli Konrad a second year computer science student from
>> the University of Buea, Cameroon, Africa. I am a GSoC 2015 participant,
>> and I have also worked on some open-source projects on github
>> (http://github.com/djkonro) and sourceforge
>> (https://sourceforge.net/u/konrado/profile/). I am very passionate
>> about cloud development, distributed systems, virtualization and I
>> would like to contribute to OpenStack.
>>
>> I have gone through the OpenStack GSoC2016/Outreachy2016 projects
>> (https://wiki.openstack.org/wiki/Internship_ideas ) and I am
>> interested in working on "Glance - Extended support for requests
>> library" and "Glance - Develop a python based GLARE (GLance Artifacts
>> REpository) client library and shell API". I would like to get some
>> detail regarding the projects. Is this a priority project? What is
>> the expected outcome? And what are some starting points?
>>
>> I am proficient with C, C++, and Python, and I have successfully built
>> and set up OpenStack using devstack.
>>
>> Thanks
>> Konrad
>> https://www.linkedin.com/in/konrad-djimeli-69809b97

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] update_allowed vs. immutable

2016-05-02 Thread Praveen Yalagandula
What is the difference between "update_allowed" and "immutable" parameters
for a property? According to the plugin guide at
http://docs.openstack.org/developer/heat/developing_guides/pluginguide.html:

update_allowed:
True if an existing resource can be updated, False means update is
accomplished by delete and re-create. Default is False.

immutable:
True means updates are not supported, resource update will fail on every
change of this property. False otherwise. Default is False.

Since any resource can be deleted and then re-created, it seems
"update_allowed" is the right parameter to define. Why do we need
"immutable"?

Thanks,
Praveen Yalagandula
Avi Networks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Mauricio Lima
Hello guys,

Reading the conversation and analyzing the problems that happened with
kolla-mesos, nothing can guarantee that the same will not happen with
the k8s work, and a single repo also ensures that the history will not
be lost. So, don't split the repo.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Grenade testing work

2016-05-02 Thread Villalovos, John L
During the OpenStack Summit it was decided that getting Grenade to work for 
Ironic would be the highest priority for the Ironic project, because without 
Grenade we won't have upgrade testing, and we need upgrade testing for the 
features we want to land to ensure we don't break anything.

We will be discussing this work in the next Ironic QA/Test meeting on 
4-May-2016 Wednesday at 1700 UTC in #openstack-meeting
https://wiki.openstack.org/wiki/Meetings/Ironic-QA

We will be using the audio conferencing system that OpenStack provides:
https://wiki.openstack.org/wiki/Infrastructure/Conferencing

Please use Conference 6555

I have been working on this for a while and wanted to share information for 
people who wanted to try to work on it locally.

I have created a GitHub project:
https://github.com/JohnVillalovos/devstack-gate-test

This is designed to use Vagrant to bring up a VM to run an environment similar 
to the gate. But it can also be used without Vagrant, though the documentation 
is a bit sparse on that.

During the call I can help people who are interested in setting it up locally 
and go over the progress.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Jeff Peeler
On Sun, May 1, 2016 at 5:03 PM, Steven Dake (stdake)  wrote:
> I don't think a separate repository is the correct approach based upon one
> off private conversations with folks at summit. Many people from that list
> approached me and indicated they would like to see the work integrated in
> one repository as outlined in my vote proposal email.  The reasons I heard
> were:
>
> Better integration of the community
> Better integration of the code base
> Doesn't present an us vs them mentality that one could argue happened during
> kolla-mesos
> A second repository makes k8s a second class citizen deployment architecture
> without a voice in the full deployment methodology
> Two gating methods versus one
> No going back to a unified repository while preserving git history
>
> I favor of the separate repositories I heard
>
> It presents a unified workspace for kubernetes alone
> Packaging without ansible is simpler as the ansible directory need not be
> deleted
>
> There were other complaints but not many pros.  Unfortunately I failed to
> communicate these complaints to the core team prior to the vote, so now is
> the time for fixing that.

I favor the repository split, the reason being that I think Ansible
and Kubernetes should each have a separate repository.
Keeping a monolithic repository is the opposite of the "Unix
philosophy". It was even recommended at one point to split every
single service into a separate repository [1].

Repository management, backports, and additional gating are all things
that I'll admit are more work with more than one single repository.
However, the ease of ramping up where everything is separated out
makes it worth it in my opinion. I believe the success of a given
community is partially due to proper delineation of expertise
(otherwise, why not put all OpenStack services in one gigantic repo?).
I'm echoing a comment somebody made at the summit: stretching the
core team across every orchestration tool is not scalable. I'm really
hoping more projects will grow around the Kolla ecosystem and can do
so without being required to become proficient with every other
orchestration system.

One argument for keeping a single repository is the comparison to the
mesos effort (which has stopped now) in a different repository. But as
has already been said, to be fair to mesos, ansible should have been
split out as well. If everything were in a single repository,
it has been suggested that the community will review more. However, I
don't personally believe that with gerrit in use that affects things
at all. OpenStack even has a gerrit dashboard creator [2], but I think
developers are capable enough at easily finding what they want to
consistently review.

As I said in a previous reply [3], I don't think git history should
affect this decision as we can make it work in either scenario. ACL
permissions seem overly complicated to be in the same repository, even
if we can arrange for a feature branch to have different permissions
from the main repo.

My views here are definitely focused on the long term view. If any
short term plans can be made to allow ourselves to eventually align
with having separate repositories, I don't think I'd have a problem
with that. However, I thought the Ansible code was supposed to have
been separated out a long time ago. This is a natural inflection point
to change policy and mode of operating, which is why I don't enjoy the
idea of waiting any longer. Luckily, having Ansible in the same
repository currently does not inhibit any momentum with Kubernetes in
a separate repository.

As far as starting the repositories split and then merging them in the
future (assuming Ansible also stays in one repo), I don't know why we
would want that. But perhaps after the Kubernetes effort has
progressed we can better determine if that makes sense with a clear
view of what the project files actually end up looking like. I don't
think that any project that changes the containers' ABI is suitable to
be labeled as "Kolla", so there wouldn't be any dockerfiles part of
the repository.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-April/093213.html
[2] https://github.com/openstack/gerrit-dash-creator
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/093645.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Ryan Hallisey
Most of the code does not overlap. We will preserve the ABI while customizing 
the ansible config generation (if we do end up using it). We can use some of 
what's in kolla as a starting point.

I'd say the code overlap is a bootstrapping point for the project.

-Ryan

- Original Message -
From: "Kevin M Fox" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Monday, May 2, 2016 12:56:22 PM
Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two

One thing we didn't talk about too much at the summit is the part of the spec 
that says we will reuse a bunch of ansible stuff to generate configs for the 
k8s case...

Do we believe that code would be minimal and not impact separate repos much, or 
is the majority of the work in the end going to be focused there? If most of 
the code ends up landing there, then it's probably not worth splitting?

Thanks,
Kevin

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Monday, May 02, 2016 6:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two

On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:

>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
> wrote:
>> Although it seems I'm in the minority, I am in favor of unified repo.
>>
>> From: "Steven Dake (stdake)" 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Sunday, May 1, 2016 at 5:03 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>> Ryan had rightly pointed out that when we made the original proposal at
>> 9am that morning, we had asked folks if they wanted to participate in a
>> separate repository.
>>
>> I don't think a separate repository is the correct approach based upon
>>one
>> off private conversations with folks at summit.  Many people from that
>>list
>> approached me and indicated they would like to see the work integrated
>>in
>> one repository as outlined in my vote proposal email.  The reasons I
>>heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened
>>during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment
>>architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> I favor of the separate repositories I heard
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not
>>be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed
>>to
>> communicate these complaints to the core team prior to the vote, so now
>>is
>> the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they
>>want to
>> work on an offshoot repository and open us up to the possible problems
>> above.
>>
>> If you are on this list:
>>
>> Ryan Hallisey
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>> Jinay Vora
>> Hui Kang
>> Davanum Srinivas
>>
>>
>>
>> Please speak up if you are in favor of a separate repository or a
>>unified
>> repository.
>>
>> The core reviewers will still take responsibility for determining if we
>> proceed on the action of implementing kubernetes in general.
>>
>> Thank you
>> -steve
>>
>>
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>I am in the favor of having two separate repos and evaluating the
>merge/split option later.
>Though in the longer run, I would recommend having a single repo with
>multiple stable deployment tools(maybe too early to comment views but
>yeah)
>
>Swapnil

Swapnil,

I gather this is what people want but this cannot be 

Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Armando M.
On 30 April 2016 at 15:42, Doug Wiegley 
wrote:

>
> On Apr 30, 2016, at 1:24 PM, Fawad Khaliq  wrote:
>
> Hi folks,
>
> Hope everyone had a great summit in Austin and got back safe! :)
>
> At the design summit, we had a Neutron stadium evolution session, which
> needs your immediate attention as it will impact many stakeholders of
> Neutron.
>
> To summarize for everyone, our Neutron leadership made the following
> proposal for the “greater good” of Neutron, to improve Neutron and reduce the
> burden on the Neutron PTL and core team of managing more Neutron drivers:
>
> Quoting the etherpad [1]
>
> "No request for inclusion are accepted for projects focussed solely on
> implementations and/or API extensions to non-open solutions.”
>
>
> Let’s be clear about official openstack projects versus in the ecosystem
> versus whatever, which is defined by the TC, not neutron:
> https://governance.openstack.org/reference/new-projects-requirements.html
>
>
> To summarize for everyone, what this means is that all Neutron drivers
> that implement non-open-source networking backends are instantly out of
> the Neutron stadium and are marked as "unofficial/unsupported/remotely
> affiliated", while the rest are capable of being tagged as "supported/official".
>
>
> So, before we throw around statements like “supported” vs “unsupported”,
> let’s take a look at what the stadium governance change really entails:
>
> - The neutron core team won’t review/merge/maintain patches for your
> plugin/driver. In many cases, this was already true.
> - The neutron release team won’t handle tagging your releases. In many
> cases, this was already true.
> - The neutron PTL is no longer involved in your repository’s governance.
> In many cases, this was effectively already true.
>
> It doesn’t mean it isn’t a valid project that supports neutron interfaces.
>
> In or out of the stadium, all plugins have these challenges:
>
> - If you’re not using a stable interface, you’ll break a lot.
> - If you are using a stable interface, you’ll still break some (standard
> rot).
> - Vendors will need to support and test their own code.
>
> Every time this comes up, people get upset that neutron is closing its
> doors, or somehow invalidating all the existing plugins. Let’s review:
>
> - The neutron api and plugin interfaces are not going away.
> - There is ongoing work to libify more interfaces, for the sake of
> plugins/drivers.
> - There is a strong push for more documentation to make integrating better.
> - Non-stadium projects still have access to their infra repos and CI
> resources.
>
> Armando’s proposal was about recognizing reality, not some huge change in
> how things actually work. What is the point of having a project governed by
> Neutron that isn’t doing anything but consuming neutron interfaces, and is
> otherwise uninvolved? How can you expect neutron to vouch for those? What
> is your proposal?
>
> Thanks,
> doug
>

++

Thanks Doug, this is very well elaborated. I am gonna steal that in my spec
:)


>
>
>
> This eliminates all commercial Neutron drivers developed for many service
> providers and enterprises who have deployed OpenStack successfully with
> these drivers. It’s unclear how the OpenStack Foundation will communicate
> its stance with all the users, but clearly this is a huge setback for
> OpenStack and Neutron. Neutron will essentially become closed to all
> existing, non-open drivers, even if these drivers have been compliant with
> Neutron API for years and users have them deployed in production, forcing
> users to re-evaluate their options.
>
> Furthermore, this proposal will erode confidence in Neutron and OpenStack,
> and destroy much of the value that the community has worked so hard to
> build over the years.
>
> As a representative and member of the OpenStack community and maintainer
> of a Neutron driver (since Grizzly), I am deeply disappointed and disagree
> with this statement [2]. Tossing out all the non-open solutions is not in
> the best interest of the end user companies that have built working
> OpenStack clusters. This proposal will lead OpenStack end users who
> deployed different drivers to think twice about OpenStack communities’
> commitment to deliver solutions they need. Furthermore, this proposal
> punishes OpenStack companies who developed commercial backend drivers to
> help end users bring up OpenStack clouds.
>
> Also, we have to realize that this proposal divides the community rather
> than unifying it. If it proceeds, it seems all OpenStack projects should
> follow for consistency. For example, this should apply to Nova which means
> Hyper-V and vSphere can't be part of Nova, PLUMgrid can't be part of Kuryr,
> and ABC company cannot have a driver/plugin for a XYZ project.
>
> Another thing to note is, for operators, the benefit is that the
> flexibility up until now has allowed them to embark on successful OpenStack
> deployments. For those operators, 

Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2016-05-02 08:51:58 -0700:
> 
> Well IMO that's actually often a problem.  My goal across OpenStack 
> projects in general is to allow them to make use of SQL more effectively 
> than they do right now; for example, in Neutron I am helping them to 
> rework a block of code that inefficiently loads a block of data 
> into memory, scans it for CIDR overlaps, and then pushes data back out. 
> This approach prevents it from performing a single UPDATE statement and 
> ushers in the need for pessimistic locking against concurrent 
> transactions.  Instead, I've written for them a simple stored function 
> proof-of-concept [2] that will allow the entire operation to be 
> performed on the database side alone in a single statement.  Wins like 
> these are much less feasible if not impossible when a project decides it 
> wants to split its backend store between dramatically different 
> databases which don't offer such features.
> 

FWIW, I agree with you. If you're going to use SQLAlchemy, use it to
take advantage of the relational model.

However, how is what you describe a win? Whether you use SELECT .. FOR
UPDATE, or a stored procedure, the lock is not distributed, and thus, will
still suffer rollback failures in Galera. For single DB server setups, you
don't have to worry about that, and SELECT .. FOR UPDATE will work fine.

So to me, this is something where you need a distributed locking system
(ala ZooKeeper) to actually solve the problem for multiple database
servers.
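
To make that concrete, here's a minimal sketch of the distributed-lock
approach using tooz with a ZooKeeper backend (the URL, lock name, and
allocate_cidr() stub are illustrative placeholders, not real Nova or
Neutron code):

    # Minimal sketch: a lock shared by every API worker on every node.
    from tooz import coordination

    def allocate_cidr():
        pass  # stand-in for the real critical section doing the DB work

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'api-worker-1')
    coordinator.start()

    # Unlike SELECT .. FOR UPDATE, this serializes writers across all
    # database servers, so two Galera masters can't race on the same
    # rows and trigger a certification-failure rollback.
    with coordinator.get_lock(b'subnet-allocation'):
        allocate_cidr()

    coordinator.stop()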

Furthermore, any logic that happens inside the database server is extra
load on a much much much harder resource to scale, using code that is
much more complicated to update. For those reasons I'm generally opposed
to using any kind of stored procedures in large scale systems. It's the
same reason I dislike foreign key enforcement: you're expending a limited
resource to mitigate a problem which _can_ be controlled and addressed
with non-stateful resources that are easier and simpler to scale.

> >
> > Concretely, we think that there are three possible approaches:
> >  1) We can use the SQLAlchemy API as the common denominator between a 
> > relational and non-relational implementation of the db.api component. These 
> > two implementations could continue to converge by sharing a large amount of
> > code.
> >  2) We create a new non-relational implementation (from scratch) of the
> > db.api component. It would probably require more work.
> >  3) We are also studying one last alternative: writing a SQLAlchemy
> > engine that targets NewSQL databases (scalability + ACID):
> >   - https://github.com/cockroachdb/cockroach
> >   - https://github.com/pingcap/tidb
> 
> Going with a NewSQL backend is by far the best approach here.   That 
> way, very little needs to be reinvented and the application's approach 
> to data doesn't need to dramatically change.
> 
> But also, w.r.t. Cells there seems to be some remaining debate over why 
> exactly a distributed approach is even needed.  As others have posted, a 
> single MySQL database, replicated across Galera or not, scales just fine 
> for far more data than Nova ever needs to store.  So it's not clear why 
> the need for a dramatic rewrite of its datastore is called for.
> 

To be clear, it's not the amount of data, but the size of the failure
domain. We're more worried about what will happen to those 40,000 open
connections from our 4000 servers when we do have to violently move them.

That particular problem isn't as scary if you have a large
Cassandra/MongoDB/Riak/ROME cluster, as the client libraries are
generally connecting to all or most of the nodes already, and will
simply use a different connection if the initial one fails. However,
these other systems also bring a whole host of new problems which the
simpler SQL approach doesn't have.

So it's worth doing an actual analysis of the failure handling before
jumping to the conclusion that a pile of cells/sharding code or a rewrite
to use a distributed database would be of benefit.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Kyle Mestery
On Mon, May 2, 2016 at 12:22 PM, Armando M.  wrote:
>
>
> On 30 April 2016 at 14:24, Fawad Khaliq  wrote:
>>
>> Hi folks,
>>
>> Hope everyone had a great summit in Austin and got back safe! :)
>>
>> At the design summit, we had a Neutron stadium evolution session, which
>> needs your immediate attention as it will impact many stakeholders of
>> Neutron.
>
>
> It's my intention to follow up with a formal spec submission to
> neutron-specs as soon as I recover from the trip. Then you'll have a more
> transparent place to voice your concern.
>
>>
>>
>> To summarize for everyone, our Neutron leadership made the following
>> proposal for the “greater-good” of Neutron to improve and reduce burden on
>> the Neutron PTL and core team to avoid managing more Neutron drivers:
>
>
> It's not just about burden. It's about consistency first and foremost.
>
>>
>>
>> Quoting the etherpad [1]
>>
>> "No request for inclusion are accepted for projects focussed solely on
>> implementations and/or API extensions to non-open solutions."
>
>
> By the way, this was brought forward and discussed way before the Summit. In
> fact this is already implemented at the Neutron governance level [1].
>
>>
>> To summarize for everyone what this means is that all Neutron drivers,
>> which implement non open source networking backends are instantly out of the
>> Neutron stadium and are marked as "unofficial/unsupported/remotely
>> affiliated" and rest are capable of being tagged as "supported/official”.
>
>
> Totally false.
>
> All this means is that these projects do not show up in list [1] (minus [2],
> which I forgot): i.e. these projects are the projects the Neutron team
> vouches for. Supportability is not a property tracked by this list. You,
> amongst many, should know that it takes a lot more than being part of a list
> to be considered a supported solution, and I am actually even surprised that
> you are misled/misleading by bringing 'support' into this conversation.
>
> [1] http://governance.openstack.org/reference/projects/neutron.html
> [2] https://review.openstack.org/#/c/309618/
>
>>
>>
>> This eliminates all commercial Neutron drivers developed for many service
>> providers and enterprises who have deployed OpenStack successfully with
>> these drivers. It’s unclear how the OpenStack Foundation will communicate
>> its stance with all the users but clearly this is a huge set back for
>> OpenStack and Neutron. Neutron will essentially become closed to all
>> existing, non-open drivers, even if these drivers have been compliant with
>> Neutron API for years and users have them deployed in production, forcing
>> users to re-evaluate their options.
>
>
> Again, totally false.
>
> The Neutron team will continue to stand behind the APIs and integration
> mechanisms in a way that made the journey of breaking down the codebase as
> we know it today possible. Any discussion of evolving these has been done
> and will be done in the open and with the support of all parties involved,
> non-open solutions included.
>
>>
>>
>> Furthermore, this proposal will erode confidence in Neutron and OpenStack,
>> and destroy much of the value that the community has worked so hard to build
>> over the years.
>>
>>
>> As a representative and member of the OpenStack community and maintainer
>> of a Neutron driver (since Grizzly), I am deeply disappointed and disagree
>> with this statement [2]. Tossing out all the non-open solutions is not in
>> the best interest of the end user companies that have built working
>> OpenStack clusters. This proposal will lead OpenStack end users who deployed
>> different drivers to think twice about OpenStack communities’ commitment to
>> deliver solutions they need. Furthermore, this proposal punishes OpenStack
>> companies who developed commercial backend drivers to help end users bring
>> up OpenStack clouds.
>
>
> What? Now you're just spreading FUD.
>
> What is being discussed in that etherpad is totally in line with [1], which
> you approved and stood behind, by the way! No-one is breaking anything,
> we're simply better reflecting what initiatives the Neutron core team is
> supposed to be accountable for and, as a result, empower the individual core
> teams of those vendor drivers. I appreciate there might be a gap in where to
> describe the effort of these initiatives in [2], but I believe there's
> something like the marketplace [3] that's better suited for what you're
> after. IMO, [2] was never intended to be that place, and I stand corrected
> if not.
>
> [1] https://review.openstack.org/#/c/309618/
> [2] http://governance.openstack.org/
> [3] https://www.openstack.org/marketplace/drivers/
>
To further support Armando here, I agree that the marketplace is the
best place to host these drivers. In fact, Thierry and I briefly
discussed this, and I think advocating for the Foundation to help put
in place more of a specific drivers program and manage it makes a lot
of sense, 

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Mauricio Lima
Just to clarify my vote.

+1 for single repository

2016-05-02 14:11 GMT-03:00 Jeff Peeler :

> Also +1 for working on kolla-kubernetes. (Please read this thread if
> you haven't yet):
>
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093575.html
>
> On Mon, May 2, 2016 at 10:56 AM, Ryan Hallisey 
> wrote:
> > +1 to start kolla-kubernetes work.
> >
> > - Original Message -
> > From: "Swapnil Kulkarni" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Sent: Monday, May 2, 2016 12:59:40 AM
> > Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> kolla-kubernetes repository management proposal up for vote
> >
> > On Mon, May 2, 2016 at 10:08 AM, Vikram Hosakote (vhosakot)
> >  wrote:
> >> A separate repo will land us in the same spot as we had with kolla-mesos
> >> originally.  We had all kinds of variance in the implementation.
> >>
> >> I’m in favor of a single repo.
> >>
> >> +1 for the single repo.
> >>
> >
> > I agree with you Vikram, but we should consider the bootstrapping
> > requirements for new deployment technologies and learn from our
> > failures with kolla-mesos.
> >
> > At the same time, it will help us evaluate the deployment technologies
> > going ahead without disrupting the kolla repo, which we can treat as a
> > repo with stable images & associated deployment tools.
> >
> >> Regards,
> >> Vikram Hosakote
> >> IRC: vhosakot
> >>
> >> From: Vikram Hosakote 
> >> Date: Sunday, May 1, 2016 at 11:36 PM
> >>
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> >> kolla-kubernetes repository management proposal up for vote
> >>
> >> Please add me too to the list!
> >>
> >> Regards,
> >> Vikram Hosakote
> >> IRC: vhosakot
> >>
> >> From: Michał Jastrzębski 
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Date: Saturday, April 30, 2016 at 9:58 AM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
> >> kolla-kubernetes repository management proposal up for vote
> >>
> >> Add me too please Steven.
> >>
> >> On 30 April 2016 at 09:50, Steven Dake (stdake) 
> wrote:
> >>
> >> Fellow core reviewers,
> >>
> >> We had a fantastic turnout at our "kubernetes as an underlay for
> >> Kolla" fishbowl session.  The etherpad documents the folks interested
> >> and the discussion at summit [1].
> >>
> >> This proposal is mostly based upon a combination of several discussions
> at
> >> open design meetings coupled with the kubernetes underlay discussion.
> >>
> >> The proposal (and what we are voting on) is as follows:
> >>
> >> Folks in the following list will be added to a kolla-k8s-core group.
> >>
> >> This kolla-k8s-core group will be responsible for code reviews and
> >> code submissions to the kolla repository for the /kubernetes top level
> >> directory.  Individuals in kolla-k8s-core that consistently approve
> >> (+2) or disapprove (-2) changes to TLD directories other than
> >> kubernetes will be handled on a case-by-case basis with several
> >> "training warnings", followed by removal from the kolla-k8s-core
> >> group.  The kolla-k8s-core group will be added as a subgroup of the
> >> kolla-core reviewer team, which means they in effect have all of the
> >> ACL access of the existing kolla repository.  I think it is better in
> >> this case to trust these individuals to do the right thing and only
> >> approve changes for the kubernetes directory until they are proposed
> >> for the kolla-core reviewer group, where they can gate changes to any
> >> part of the repository.
> >>
> >> Britt Houser
> >>
> >> mark casey
> >>
> >> Steven Dake (delta-alpha-kilo-echo)
> >>
> >> Michael Schmidt
> >>
> >> Marian Schwarz
> >>
> >> Andrew Battye
> >>
> >> Kevin Fox (kfox)
> >>
> >> Sidharth Surana (ssurana)
> >>
> >>   Michal Rostecki (mrostecki)
> >>
> >>Swapnil Kulkarni (coolsvap)
> >>
> >>MD NADEEM (mail2nadeem92)
> >>
> >>Vikram Hosakote (vhosakot)
> >>
> >>Jeff Peeler (jpeeler)
> >>
> >>Martin Andre (mandre)
> >>
> >>Ian Main (Slower)
> >>
> >> Hui Kang (huikang)
> >>
> >> Serguei Bezverkhi (sbezverk)
> >>
> >> Alex Polvi (polvi)
> >>
> >> Rob Mason
> >>
> >> Alicja Kwasniewska
> >>
> >> sean mooney (sean-k-mooney)
> >>
> >> Keith Byrne (kbyrne)
> >>
> >> Zdenek Janda (xdeu)
> >>
> >> Brandon Jozsa (v1k0d3n)
> >>
> >> Rajath Agasthya (rajathagasthya)
> >>
> >>
> >> If you already are in the kolla-core review team, you won't be added to
> the
> >> kolla-k8s-core team as you will already have the necessary ACLs to do
> the
> >> 

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Steven Dake (stdake)
I personally would like to see one set of defaults files for the default
config, and merging thereof (the stuff in roles/*/defaults).

There would be overlap there.

A lot of the overlap involves things like reno, sphinx, documentation,
gating, etc.

During kolla-mesos, separate containers (IIRC) were made, separate start
extension scripts were made, and to my dismay a completely different ABI
was implemented.

We need one ABI to the containers and that should be laid out in the spec
if it isn't already.

Regards
-steve


On 5/2/16, 10:31 AM, "Ryan Hallisey"  wrote:

>Most of the code is not an overlap. We will preserve the ABI while
>customizing the ansible config generation (if we do end up using it). We
>can use some of what's in kolla as a starting point.
>
>I'd say the code overlap is a bootstrapping point for the project.
>
>-Ryan
>
>- Original Message -
>From: "Kevin M Fox" 
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Sent: Monday, May 2, 2016 12:56:22 PM
>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>One thing we didn't talk about too much at the summit is the part of the
>spec that says we will reuse a bunch of ansible stuff to generate configs
>for the k8s case...
>
>Do we believe that code would be minimal and not impact separate repos
>much, or is the majority of the work in the end going to be focused there?
>If most of the code ends up landing there, then it's probably not worth
>splitting?
>
>Thanks,
>Kevin
>
From: Steven Dake (stdake) [std...@cisco.com]
>Sent: Monday, May 02, 2016 6:05 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:
>
>>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
>> wrote:
>>> Although it seems I'm in the minority, I am in favor of unified repo.
>>>
>>> From: "Steven Dake (stdake)" 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>questions)"
>>> 
>>> Date: Sunday, May 1, 2016 at 5:03 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>>
>>> Ryan had rightly pointed out that when we made the original proposal
>>> at 9am that morning we had asked folks if they wanted to participate
>>> in a separate repository.
>>>
>>> I don't think a separate repository is the correct approach based upon
>>> one-off private conversations with folks at summit.  Many people from
>>> that list approached me and indicated they would like to see the work
>>> integrated in one repository as outlined in my vote proposal email.
>>> The reasons I heard were:
>>>
>>> Better integration of the community
>>> Better integration of the code base
>>> Doesn't present an us-vs-them mentality that one could argue happened
>>> during kolla-mesos
>>> A second repository makes k8s a second-class-citizen deployment
>>> architecture without a voice in the full deployment methodology
>>> Two gating methods versus one
>>> No going back to a unified repository while preserving git history
>>>
>>> In favor of the separate repositories I heard:
>>>
>>> It presents a unified workspace for kubernetes alone
>>> Packaging without ansible is simpler as the ansible directory need not
>>> be deleted
>>>
>>> There were other complaints but not many pros.  Unfortunately I failed
>>> to communicate these complaints to the core team prior to the vote, so
>>> now is the time for fixing that.
>>>
>>> I'll leave it open to the new folks that want to do the work if they
>>> want to work on an offshoot repository and open us up to the possible
>>> problems above.
>>>
>>> If you are on this list:
>>>
>>> Ryan Hallisey
>>> Britt Houser
>>>
>>> mark casey
>>>
>>> Steven Dake (delta-alpha-kilo-echo)
>>>
>>> Michael Schmidt
>>>
>>> Marian Schwarz
>>>
>>> Andrew Battye
>>>
>>> Kevin Fox (kfox)
>>>
>>> Sidharth Surana (ssurana)
>>>
>>>  Michal Rostecki (mrostecki)
>>>
>>>   Swapnil Kulkarni (coolsvap)
>>>
>>>   MD NADEEM (mail2nadeem92)
>>>
>>>   Vikram Hosakote (vhosakot)
>>>
>>>   Jeff Peeler (jpeeler)
>>>
>>>   Martin Andre (mandre)
>>>
>>>   Ian Main (Slower)
>>>
>>> Hui Kang (huikang)
>>>
>>> Serguei Bezverkhi (sbezverk)
>>>
>>> Alex Polvi (polvi)
>>>
>>> Rob Mason
>>>
>>> Alicja Kwasniewska
>>>
>>> sean mooney (sean-k-mooney)
>>>
>>> Keith Byrne (kbyrne)
>>>
>>> Zdenek Janda (xdeu)
>>>
>>> Brandon Jozsa (v1k0d3n)
>>>
>>> Rajath Agasthya (rajathagasthya)
>>> Jinay Vora
>>> Hui Kang
>>> Davanum Srinivas
>>>
>>>
>>>
>>> Please speak up if you are in favor of a separate repository or a
>>>unified
>>> repository.
>>>
>>> The core reviewers will still take 

[openstack-dev] [kuryr] Question for Antoni Puimedon about load balancers and overlay networks

2016-05-02 Thread Mike Spreitzer
I am looking at 
https://www.openstack.org/videos/video/project-kuryr-docker-delivered-kubernetes-next
 
, around 28:00.  You have said that overlay networks are involved, and are 
now talking about load balancers.  Is this Neutron LBaaS?  As far as I 
know, a Neutron LBaaS instance is "one-armed" --- both the VIP and the 
back endpoints have to be on the same Neutron network.  But you seem to 
have all the k8s services on a single subnet.  So I am having some trouble 
following exactly what is going on.  Can you please elaborate?

BTW, there is also some discussion of k8s multi-tenancy in the Kubernetes 
Networking SIG and the Kubernetes OpenStack SIG.

Thanks,
Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Adam Young

On 04/26/2016 08:28 AM, Guangyu Suo wrote:

Hello, oslo team

For now, some sensitive options like passwords or tokens are configured
as plaintext; anyone who has the privilege to read the configuration
file can get the real password. This may be a security problem that is
unacceptable for some people.


So the first solution that comes to my mind is to encrypt these options
when configuring them and decrypt them when reading them in
oslo.config. This is a bit like what apache/openldap did, but the
difference is that those programs apply a salted hash to the password,
a one-way encryption that can't be reversed, and they can recognize the
hashed value. But if we do this work in oslo.config, for example for
the admin_password in the keystone_middleware section, we must feed
keystone the plaintext password, which will be hashed in keystone and
compared with the stored hashed password; thus the encrypted value in
oslo.config must be decrypted back to plaintext. So we should encrypt
these options using a symmetric or asymmetric method with a key, put
the key in a well secured place, and decrypt them using the same key
when reading them.


Of course, this feature should be disabled by default. Any ideas?
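
(For illustration, the symmetric scheme proposed above would amount to
something like the following sketch, using the cryptography library's
Fernet; this is not an existing oslo.config feature, and the key path
is a made-up example of the "well secured place":)

    from cryptography.fernet import Fernet

    # Key generated once, e.g. with Fernet.generate_key(), and kept
    # readable only by the service user.
    with open('/etc/keystone/config.key', 'rb') as f:
        cipher = Fernet(f.read())

    # What the deployer would store in the config file:
    stored = cipher.encrypt(b'the-admin-password')

    # What oslo.config would have to do whenever the option is read:
    plaintext = cipher.decrypt(stored)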


PKI.  Each service gets a client certificate that they use, signed by a 
self-signed CA on the controller node, and uses the Tokenless/X509
Mapping in Keystone to identify itself.


Do not try to build a crypto system around passwords.  None of us are 
qualified to do that.


We should be able to kill explicit service users and use X509 anyway.

Kerberos would work, too, for deployments that prefer that form of 
Authentication.  We can document this, but do not need to implement.


Certmonger can manage the certificates for us.

Anchor can act as the CA for deployments that want something more than
self-signed certificates, but don't want to go with a full CA.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Morgan Fainberg
On Mon, May 2, 2016 at 11:32 AM, Adam Young  wrote:

> On 04/26/2016 08:28 AM, Guangyu Suo wrote:
>
> Hello, oslo team
>
> For now, some sensitive options like passwords or tokens are configured as
> plaintext; anyone who has the privilege to read the configuration file can
> get the real password. This may be a security problem that is unacceptable
> for some people.
>
> So the first solution that comes to my mind is to encrypt these options
> when configuring them and decrypt them when reading them in oslo.config.
> This is a bit like what apache/openldap did, but the difference is that
> those programs apply a salted hash to the password, a one-way encryption
> that can't be reversed, and they can recognize the hashed value. But if we
> do this work in oslo.config, for example for the admin_password in the
> keystone_middleware section, we must feed keystone the plaintext password,
> which will be hashed in keystone and compared with the stored hashed
> password; thus the encrypted value in oslo.config must be decrypted back to
> plaintext. So we should encrypt these options using a symmetric or
> asymmetric method with a key, put the key in a well secured place, and
> decrypt them using the same key when reading them.
>
> Of course, this feature should be disabled by default. Any ideas?
>
>
> PKI.  Each service gets a client certificate that they use, signed by a
> self-signed CA on the controller node, and uses the Tokenless/X509 Mapping
> in Keystone to identify itself.
>
> Do not try to build a crypto system around passwords.  None of us are
> qualified to do that.
>
>
++ Rule 1 of crypto - don't roll your own system.


> We should be able to kill explicit service users and use X509 anyway.
>
>
++ We should be moving towards token-less auth for service users where
possible. This doesn't mitigate the need to manage certs/keys properly (NSS?
devops-y things, etc)


> Kerberos would work, too, for deployments that prefer that form of
> Authentication.  We can document this, but do not need to implement.
>
>
Never hurts to have alternatives.


> Certmonger can manage the certificates for us.
>
> Anchor can act as the CA for deployments that want something more than
> self-signed certificates, but don't want to go with a full CA.
>
>
I'd be in favor of Anchor being the default (devstack? gate? "best
practices") choice for 'service users' over the full CA, but in either case
as long as we have an easy-to-use-low-barrier-to-entry-well-formed-system,
we're on the right path.


>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Could you tell me the best way to develop openstack in Windows?

2016-05-02 Thread Perry, Sean
Agreed. A VM running Linux is the only way to really develop for OpenStack.

One could write the code on Windows and share it into the Linux VM to then be 
run via tox or another test tool. Both VirtualBox and VMWare support sharing a 
folder from your Windows host into the Linux guest.
This way you have the comfort of the Windows tools for writing software but the 
Linux environment for running it and debugging it. Yes, debug it in the Linux 
environment.


From: Sam Matzek [matzek...@gmail.com]
Sent: Monday, May 02, 2016 5:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] Could you tell me the best way to develop 
openstack in Windows?

For reading the code I use Eclipse Mars with PyDev installed on
Windows 7.  This allows me to CTRL-click on method calls to link to
their source, etc.  I also use this for some amount of code
modification and can run some amount of unit tests after pip
installing a lot of dependencies.  This is more of a convenience
method though.

For real development where you will contribute back I suggest running
Linux VM on Windows where you can then use command line git and tox.

On Mon, May 2, 2016 at 6:53 AM, Anita Kuno  wrote:
> On 05/02/2016 04:20 AM, 박준하 wrote:
>> Hi folks,
>>
>>
>>
>> I’m a beginner at OpenStack development. I’ve been really interested
>> in learning about OpenStack and wanted to modify and test it.
>>
>>
>>
>> But I’m using PyCharm on Windows 7, and it’s very hard to program
>> and test when I try to change the code a little bit.
>>
>>
>>
>> You guys are among the best open source developers, especially on
>> OpenStack. Could you tell me about your environment for developing on
>> Windows?
>>
>>
>>
>>
>>
>> Thanks.
>>
>>
>>
>>
>>
>> Jun-ha, Park
>>
>> Freshman of Cloud Technical Group KINX corp., Korea
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> The optimum environment for developing open source code consists of open
> source tools.
>
> The majority of the open source development community uses open source
> tools for development. The community is best able to offer support for
> use of open source tools.
>
> If you need help accessing, and finding ways to learn how to use, open
> source tools, please do ask.
>
> Thank you,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Morgan Fainberg
On Tue, Apr 26, 2016 at 4:25 PM, Guangyu Suo  wrote:

> I think there is a little misunderstanding over here, the key point about
> this problem is that you store your password as *plaintext* in the
> configuration file, maybe this password is also the password of many other
> systems. You can't stop the right person from doing the right thing: if
> someone gets the encrypted password and can also get into
>

We can engineer things for best practices. If this password is the "same as
many other systems" we need to have more conversations on why that is the
case and how we can encourage people to do the correct/best practice thing
and not share passwords. I know the exposure of a password to your
infrastructure control plane is a big thing, but shared passwords are simply
not something we should be engineering specific solutions for; instead we
should document the best practices and why they are important.


> the box, then he is the right person, just like when somebody gets your
> password through a "brute force" attack, you can't stop him from doing the
> right thing. If someone gets the encrypted password but cannot get into the
> box, he can do nothing, and this is our goal. So I think splitting the
> global configuration file into "general" and "secure" files, and encrypting
> the secure file, is the right way to do this.
>
>
> 2016-04-26 16:05 GMT-05:00 Doug Hellmann :
>
>> Excerpts from Morgan Fainberg's message of 2016-04-26 10:17:30 -0500:
>> > On Tue, Apr 26, 2016 at 9:24 AM, Jordan Pittier <
>> jordan.pitt...@scality.com>
>> > wrote:
>> >
>> > >
>> > >
>> > > On Tue, Apr 26, 2016 at 3:32 PM, Daniel P. Berrange <
>> berra...@redhat.com>
>> > > wrote:
>> > >
>> > >> On Tue, Apr 26, 2016 at 08:19:23AM -0500, Doug Hellmann wrote:
>> > >> > Excerpts from Guangyu Suo's message of 2016-04-26 07:28:42 -0500:
>> > >> > > Hello, oslo team
>> > >> > >
>> > >> > > For now, some sensitive options like password or token are
>> > >> > > configured as plaintext, anyone who has the privilege to read
>> > >> > > the configuration file can get the real password, this may be
>> > >> > > a security problem that is unacceptable for some people.
>> > >>
>> > > It's not a security problem if your config files have the proper
>> > > permissions.
>> > >
>> > >
>> > >> > >
>> > >> > > So the first solution that comes to my mind is to encrypt
>> > >> > > these options when configuring them and decrypt them when
>> > >> > > reading them in oslo.config. This is a bit like what
>> > >> > > apache/openldap did, but the difference is that those programs
>> > >> > > apply a salted hash to the password, a one-way encryption that
>> > >> > > can't be reversed, and they can recognize the hashed value.
>> > >> > > But if we do this work in oslo.config, for example for the
>> > >> > > admin_password in the keystone_middleware section, we must
>> > >> > > feed keystone the plaintext password, which will be hashed in
>> > >> > > keystone and compared with the stored hashed password; thus
>> > >> > > the encrypted value in oslo.config must be decrypted back to
>> > >> > > plaintext. So we should encrypt these options using a
>> > >> > > symmetric or asymmetric method with a key, put the key in a
>> > >> > > well secured place, and decrypt them using the same key when
>> > >> > > reading them.
>> > >>
>> > > The issue here is to find a "well secured place". We should not just
>> > > move the problem somewhere else.
>> > >
>> > >
>> > >> > >
>> > >> > > Of course, this feature should be disabled by default. Any ideas?
>> > >> >
>> > >> > Managing the encryption keys has always been the issue blocking
>> > >> > implementing this feature when it has come up in the past. We
>> can't have
>> > >> > oslo.config rely on a separate OpenStack service for key
>> management,
>> > >> > because presumably that service would want to use oslo.config and
>> then
>> > >> > we have a dependency cycle.
>> > >> >
>> > >> > So, we need a design that lets us securely manage those encryption
>> keys
>> > >> > before we consider adding encryption. If we solve that, it's then
>> > >> > probably simpler to encrypt an entire config file instead of
>> worrying
>> > >> > about encrypting individual values (something like how ansible
>> vault
>> > >> > works).
>> > >>
>> > >> IMHO encrypting oslo config files is addressing the wrong problem.
>> > >> Rather than having sensitive passwords stored in the main config
>> > >> files, we should have them stored completely separately by a secure
>> > >> password manager of some kind. The config file would then merely
>> > >> contain the name or uuid of an entry in the password manager. The
>> > >> service (eg nova-compute) would then query that password manager
>> > >> to get the actual sensitive password data it requires. At this point
>> > >> oslo.config does not need to know/care about encryption of its data
>> > >> as there's no longer 
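
(To make the pattern Daniel describes concrete, here's a minimal sketch
assuming the castellan key-manager abstraction; the UUID standing in
for the config value, and the bare RequestContext, are made up for
illustration:)

    from castellan import key_manager
    from oslo_context import context

    # The config file holds only this reference, never the secret:
    secret_id = '6f4d3cbd-55d0-488f-8e45-example-uuid'

    ctxt = context.RequestContext()  # credentials for the secret store
    manager = key_manager.API()
    secret = manager.get(ctxt, secret_id)
    password = secret.get_encoded()  # resolved at runtime, not on disk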

Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Armando M.
On 30 April 2016 at 14:24, Fawad Khaliq  wrote:

> Hi folks,
>
> Hope everyone had a great summit in Austin and got back safe! :)
>
> At the design summit, we had a Neutron stadium evolution session, which
> needs your immediate attention as it will impact many stakeholders of
> Neutron.
>

It's my intention to follow up with a formal spec submission to
neutron-specs as soon as I recover from the trip. Then you'll have a more
transparent place to voice your concern.


>
> To summarize for everyone, our Neutron leadership made the following
> proposal for the “greater-good” of Neutron to improve and reduce burden on
> the Neutron PTL and core team to avoid managing more Neutron drivers:
>

It's not just about burden. It's about consistency first and foremost.


>
> Quoting the etherpad [1]
>
> "No request for inclusion are accepted for projects focussed solely on
> implementations and/or API extensions to non-open solutions."
>

By the way, this was brought forward and discussed way before the Summit.
In fact this is already implemented at the Neutron governance level [1].


> To summarize for everyone what this means is that all Neutron drivers,
> which implement non open source networking backends are instantly out of
> the Neutron stadium and are marked as "unofficial/unsupported/remotely
> affiliated" and rest are capable of being tagged as "supported/official”.
>

Totally false.

All this means is that these projects do not show up in list [1] (minus
[2], which I forgot): i.e. these projects are the projects the Neutron team
vouches for. Supportability is not a property tracked by this list. You,
amongst many, should know that it takes a lot more than being part of a
list to be considered a supported solution, and I am actually even
surprised that you are misled/misleading by bringing 'support' into this
conversation.

[1] http://governance.openstack.org/reference/projects/neutron.html
[2] https://review.openstack.org/#/c/309618/


>
> This eliminates all commercial Neutron drivers developed for many service
> providers and enterprises who have deployed OpenStack successfully with
> these drivers. It’s unclear how the OpenStack Foundation will communicate
> its stance with all the users but clearly this is a huge set back for
> OpenStack and Neutron. Neutron will essentially become closed to all
> existing, non-open drivers, even if these drivers have been compliant with
> Neutron API for years and users have them deployed in production, forcing
> users to re-evaluate their options.
>

Again, totally false.

The Neutron team will continue to stand behind the APIs and integration
mechanisms in a way that made the journey of breaking down the codebase as
we know it today possible. Any discussion of evolving these has been done
and will be done in the open and with the support of all parties involved,
non-open solutions included.


>
> Furthermore, this proposal will erode confidence in Neutron and OpenStack,
> and destroy much of the value that the community has worked so hard to
> build over the years.
>

> As a representative and member of the OpenStack community and maintainer
> of a Neutron driver (since Grizzly), I am deeply disappointed and disagree
> with this statement [2]. Tossing out all the non-open solutions is not in
> the best interest of the end user companies that have built working
> OpenStack clusters. This proposal will lead OpenStack end users who
> deployed different drivers to think twice about OpenStack communities’
> commitment to deliver solutions they need. Furthermore, this proposal
> punishes OpenStack companies who developed commercial backend drivers to
> help end users bring up OpenStack clouds.
>

What? Now you're just spreading FUD.

What is being discussed in that etherpad is totally in line with [1], which
you approved and stood behind, by the way! No-one is breaking anything,
we're simply better reflecting what initiatives the Neutron core team is
supposed to be accountable for and, as a result, empower the individual
core teams of those vendor drivers. I appreciate there might be a gap in
where to describe the effort of these initiatives in [2], but I believe
there's something like the marketplace [3] that's better suited for what
you're after. IMO, [2] was never intended to be that place, and I stand
corrected if not.

[1] https://review.openstack.org/#/c/309618/
[2] http://governance.openstack.org/
[3] https://www.openstack.org/marketplace/drivers/


> Also, we have to realize that this proposal divides the community rather
> than unifying it. If it proceeds, it seems all OpenStack projects should
> follow for consistency. For example, this should apply to Nova which means
> HyperV and vSphere can't be part of Nova, PLUMgrid can't be part of Kuryr,
> and ABC company cannot have a driver/plugin for a XYZ project.
>

Every project is different, comparing Nova to Neutron or Cinder etc is not
a like-for-like comparison.


>
> Another thing to note 

Re: [openstack-dev] [nova] Distributed Database

2016-05-02 Thread Jay Pipes

On 05/02/2016 11:51 AM, Mike Bayer wrote:

On 05/02/2016 07:38 AM, Matthieu Simonin wrote:

As far as we understand, the idea of an ORM is to hide the relational
database behind an object-oriented API.


I actually disagree with that completely.  The reason ORMs are so
maligned is because of this misconception; developer attempts to use an
ORM so that they need not have any awareness of their
database, how queries are constructed, or even its schema's design;
witness tools such as Django ORM and Rails ActiveRecord which promise
this.   You then end up with an inefficient and unextensible mess
because the developers never considered anything about how the database
works or how it is queried, nor do they even have easy ways to monitor
or control it while still making use of the tool.   There are many blog
posts and articles that discuss this and it is in general known as the
"object relational impedance mismatch".

SQLAlchemy's success comes from its rejection of this entire philosophy.
  The purpose of SQLAlchemy's ORM is not to "hide" anything but rather
to apply automation to the many aspects of relational database
communication as well as row->object mapping that otherwise express
themselves in an application as either a large amount of repetitive
boilerplate throughout an application or as an awkward series of ad-hoc
abstractions that don't really do the job very well.   SQLAlchemy is
designed to expose both the schema design as well as the structure of
queries completely.   My talk at [1] goes into this topic in detail
including specific API architectures that facilitate this concept.

It's for that reason that I've always rejected notions of attempting to
apply SQLAlchemy directly on top of a datastore that is explicitly
non-relational.   By doing so, you remove a vast portion of the
functionality that relational databases provide and there's really no
point in using a tool like SQLAlchemy that is very explicit about DDL
and SQL on top of that kind of database.

To effectively put SQLAlchemy on top of a non-relational datastore, what
you really want to do is build an entire SQL engine on top of it.  This
is actually feasible; I was doing work for the now-defunct FoundationDB
(was bought by Apple) who had a very good implementation of
SQL-on-top-of-distributed keystore going, and the Cockroach and TiDB
projects you mention are definitely the most appropriate choice to take
if a certain variety of distribution underneath SQL is desired.


Well said, Mike, on all points above.




But also, w.r.t. Cells there seems to be some remaining debate over why
exactly a distributed approach is even needed.  As others have posted, a
single MySQL database, replicated across Galera or not, scales just fine
for far more data than Nova ever needs to store.  So it's not clear why
the need for a dramatic rewrite of its datastore is called for.


Cells(v1) in Nova *already has* completely isolated DB/MQ for each cell, 
and there are a bunch of
duplicated-but-slightly-different-and-impossible-to-maintain code paths 
in the scheduler and compute manager. Part of the cellsv2 effort is to 
remove these duplicated code paths.


Cells are, as much as anything else, an answer to an MQ scaling problem, 
less so an answer to a DB scaling problem. Having a single MQ bus for 
tens of thousands of compute nodes is just not tenable -- at least with 
the message passing patterns and architecture that we use today...


Finally, Cells also represent a failure domain for the control plane. If 
a network partition occurs between a cell and the top-level API layer, 
no other cell is affected by the disruption.


Now, what does all this mean with regards to whether to use a single 
distributed database solution versus a single RDBMS versus many isolated 
RDBMS instances per cell? Not sure. Arguments can be made for all three 
approaches clearly. Depending on what folks' priorities are with regards 
to simplicity, scale, and isolation of failure domains, the "right" 
choice is tough to determine.


On the one hand, using a single distributed datastore like Cassandra for 
everything would make things conceptually easy to reason about and make 
OpenStack clouds much easier to deploy at scale.


On the other hand, porting things to Cassandra (or any other NoSQL 
solution) would require a complete overhaul of the way *certain* data is 
queried in the Nova subsystems. Examples of Cassandra's poor fit for 
some types of data are quota and resource usage aggregation queries. 
While Cassandra does support some aggregation via CQL in recent 
Cassandra versions, Cassandra simply wasn't built for this kind of data 
access pattern and doing anything reasonably complicated would require 
using Pig/Hive/Map-Reduce-y stuff, which kind of defeats the simplicity 
arguments.
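
To make the contrast concrete, here's a minimal sketch of the kind of
usage aggregation in question, written with SQLAlchemy against a
made-up stand-in for nova's instances table:

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            func, select)

    metadata = MetaData()
    instances = Table(
        'instances', metadata,
        Column('project_id', String(36)),
        Column('vcpus', Integer),
        Column('deleted', Integer),
    )

    # One grouped statement on any RDBMS; in CQL this cross-partition
    # GROUP BY isn't directly expressible and pushes you toward
    # denormalized counters or an external map-reduce pass.
    vcpus_per_project = (
        select(instances.c.project_id,
               func.sum(instances.c.vcpus).label('vcpus_used'))
        .where(instances.c.deleted == 0)
        .group_by(instances.c.project_id)
    )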


Where queries against resource usage aggregate data are a great fit for 
a single RDBMS system, doing these queries across multiple RDBMS systems 
(i.e. in the child cell 

Re: [openstack-dev] [heat] update_allowed vs. immutable

2016-05-02 Thread Jason Dunsmore
Hi Praveen,


The docs you referred to in the plugin guide are for the resource property
attributes - they have nothing to do with parameters.  This is an important 
distinction because there is also an "immutable" parameter attribute.


The "immutable" property attribute was added because an equivalent to AWS' 
"Updates are not supported" functionality was needed:

https://specs.openstack.org/openstack/heat-specs/specs/juno/implement-aws-updates-not-supported.html#aws-cloudformation
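
For illustration, here's a minimal sketch of a resource plugin's
properties_schema showing all three behaviours (the property names are
made up):

    from heat.engine import properties

    properties_schema = {
        'name': properties.Schema(
            properties.Schema.STRING,
            update_allowed=True,   # changed in place on stack update
        ),
        'flavor': properties.Schema(
            properties.Schema.STRING,
            # update_allowed defaults to False: a change is applied by
            # deleting and re-creating (replacing) the resource
        ),
        'zone': properties.Schema(
            properties.Schema.STRING,
            immutable=True,        # any change fails the stack update
        ),
    }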


Jason



From: Praveen Yalagandula 
Sent: Monday, May 2, 2016 11:55 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] update_allowed vs. immutable

What is the difference between "update_allowed" and "immutable" parameters for 
a property? According to the plugin guide at
http://docs.openstack.org/developer/heat/developing_guides/pluginguide.html:

update_allowed:
True if an existing resource can be updated, False means update is accomplished 
by delete and re-create. Default is False.

immutable:
True means updates are not supported, resource update will fail on every change 
of this property. False otherwise. Default is False.

Since any resource can be deleted and then re-created, it seems 
"update_allowed" is the right parameter to define. Why do we need "immutable"?

Thanks,
Praveen Yalagandula
Avi Networks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Shamail Tahir
Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This
gave us the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata)
releases.

If we were to vote for the name of the P release soon (since the location
is now known) we would be able to have names associated with the current
release cycle (Newton), N+1 (Ocata), and N+2 (P).  This would also allow us
to get back to only voting for one name per release cycle but consistently
have names for N, N+1, and N+2.

-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] composable roles team

2016-05-02 Thread Brent Eagles

On 05/01/2016 08:01 PM, Emilien Macchi wrote:



>> If a feature can't land without disruption, then why not using a
>> special branch to be merged once the feature is complete ?
> 
> The problem is that during our work, some people will update the 
> manifests, and it will affect us, since we're copy/pasting the
> code somewhere else (in puppet-tripleo), that's why we might need
> some outstanding help from the team, to converge to the new model. 
> I know asking that is tough, but if we want to converge quickly,
> we need to make the adoption accepted by everyone. One thing we can
> do, is asking our reviewer team to track the patches that will need
> some work, and the composable team can help in the review process.
> 
> The composable roles is a feature that we all wait, having the
> help from our contributors will really save us time.

s/wait/want/ I expect.

Well said. I understand the reservations on the -1 for non-composable
role patches. It *does* feel a bit strong, but in the end I think it's
just being honest. That a patch on tht is going to land "as is" prior
to the merge of the related composable role changes seems really
unlikely. I, for one, am willing to do what I can to help
anyone who has had their patch pre-empted during this period get their
patch refactored/ported once the comp-roles thing has settled down.

Cheers,

Brent


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Glare meeting SKIPPED this week.

2016-05-02 Thread Nikhil Komawar
Hi all,

We skipped the meeting today as most of the team is either OOO or
catching up after summit. We have tentatively cancelled next week's
meeting as well; however, if there is interest I can plan to open the agenda.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Jonathan Proulx
On Mon, May 02, 2016 at 11:41:58AM -0700, Morgan Fainberg wrote:
:On Mon, May 2, 2016 at 11:32 AM, Adam Young  wrote:

:> Kerberos would work, too, for deployments that prefer that form of
:> Authentication.  We can document this, but do not need to implement.
:>
:>
:Never hurts to have alternatives.

Not sure how many people have this use case, but I do, so +1 from me :)

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Could you tell me the best way to develop openstack in Windows?

2016-05-02 Thread Jeremy Stanley
On 2016-05-02 17:20:13 +0900 (+0900), 박준하 wrote:
[...]
> But I’m using PyCharm on Windows 7, and it’s very hard to program
> and test when I try to change the code a little bit.
[...]

In my experience, when developing software which runs primarily on
Linux it's a lot more complicated to attempt to test it on non-Linux
systems than it is to learn to use Linux instead (at least in a
virtual machine somewhere). Similarly, I would not expect to be able
to comfortably develop software under Linux when it's intended to
mostly run on Microsoft Windows or Apple Macintosh.

I personally just use Linux as my development platform. I gather
some in our community prefer to use Mac or Win systems but I think
in most cases they end up augmenting their development workflow with
Linux virtual machines (either local or at a remote service
provider).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Gal Sagie
Maybe it would help if, instead of trying to define criteria for which
projects don't fit into the stadium, you try to define in your spec what
IT IS, and for what purpose it's there.


On Mon, May 2, 2016 at 8:53 PM, Kyle Mestery  wrote:

> On Mon, May 2, 2016 at 12:22 PM, Armando M.  wrote:
> >
> >
> > On 30 April 2016 at 14:24, Fawad Khaliq  wrote:
> >>
> >> Hi folks,
> >>
> >> Hope everyone had a great summit in Austin and got back safe! :)
> >>
> >> At the design summit, we had a Neutron stadium evolution session, which
> >> needs your immediate attention as it will impact many stakeholders of
> >> Neutron.
> >
> >
> > It's my intention to follow up with a formal spec submission to
> > neutron-specs as soon as I recover from the trip. Then you'll have a more
> > transparent place to voice your concern.
> >
> >>
> >>
> >> To summarize for everyone, our Neutron leadership made the following
> >> proposal for the “greater-good” of Neutron to improve and reduce burden
> on
> >> the Neutron PTL and core team to avoid managing more Neutron drivers:
> >
> >
> > It's not just about burden. It's about consistency first and foremost.
> >
> >>
> >>
> >> Quoting the etherpad [1]
> >>
> >> "No request for inclusion are accepted for projects focussed solely on
> >> implementations and/or API extensions to non-open solutions."
> >
> >
> > By the way, this was brought forward and discussed way before the
> Summit. In
> > fact this is already implemented at the Neutron governance level [1].
> >
> >>
> >> To summarize for everyone what this means is that all Neutron drivers,
> >> which implement non open source networking backends are instantly out
> of the
> >> Neutron stadium and are marked as "unofficial/unsupported/remotely
> >> affiliated" and rest are capable of being tagged as
> "supported/official”.
> >
> >
> > Totally false.
> >
> > All this means is that these projects do not show up in list [1] (minus
> [2],
> > which I forgot): ie. these projects are the projects the Neutron team
> > vouches for. Supportability is not a property tracked by this list. You,
> > amongst many, should know that it takes a lot more than being part of a
> list
> > to be considered a supported solution, and I am actually even surprised
> that
> > you are misled/misleading by bringing 'support' into this conversation.
> >
> > [1] http://governance.openstack.org/reference/projects/neutron.html
> > [2] https://review.openstack.org/#/c/309618/
> >
> >>
> >>
> >> This eliminates all commercial Neutron drivers developed for many
> service
> >> providers and enterprises who have deployed OpenStack successfully with
> >> these drivers. It’s unclear how the OpenStack Foundation will
> communicate
> >> its stance with all the users but clearly this is a huge set back for
> >> OpenStack and Neutron. Neutron will essentially become closed to all
> >> existing, non-open drivers, even if these drivers have been compliant
> with
> >> Neutron API for years and users have them deployed in production,
> forcing
> >> users to re-evaluate their options.
> >
> >
> > Again, totally false.
> >
> > The Neutron team will continue to stand behind the APIs and integration
> > mechanisms in a way that made the journey of breaking down the codebase
> as
> > we know it today possible. Any discussion of evolving these has been done
> > and will be done in the open and with the support of all parties
> involved,
> > non-open solutions included.
> >
> >>
> >>
> >> Furthermore, this proposal will erode confidence in Neutron and
> OpenStack,
> >> and destroy much of the value that the community has worked so hard to
> build
> >> over the years.
> >>
> >>
> >> As a representative and member of the OpenStack community and maintainer
> >> of a Neutron driver (since Grizzly), I am deeply disappointed and
> disagree
> >> with this statement [2]. Tossing out all the non-open solutions is not
> in
> >> the best interest of the end user companies that have built working
> >> OpenStack clusters. This proposal will lead OpenStack end users who
> deployed
> >> different drivers to think twice about OpenStack communities’
> commitment to
> >> deliver solutions they need. Furthermore, this proposal punishes
> OpenStack
> >> companies who developed commercial backend drivers to help end users
> bring
> >> up OpenStack clouds.
> >
> >
> > What? Now you're just spreading FUD.
> >
> > What is being discussed in that etherpad is totally in line with [1],
> which
> > you approved and stood behind, by the way! No-one is breaking anything,
> > we're simply better reflecting what initiatives the Neutron core team is
> > supposed to be accountable for and, as a result, empower the individual
> core
> > teams of those vendor drivers. I appreciate there might be a gap in
> where to
> > describe the effort of these initiatives in [2], but I believe there's
> > something like the marketplace [3] that's better suited for what you're
> > after. 

[openstack-dev] [nova] FYI: Citrix XenServer CI is disabled from voting

2016-05-02 Thread Matt Riedemann
The Citrix XenServer CI is failing on most, if not all, changes it's 
running on today. Here is an example failure [1]. Devstack fails to
set up due to a bad package install, so I'm guessing there is a problem
in a mirror being used. I don't know if this is a 100% failure rate, but
it's high enough to disable it from voting on new nova patch sets.


If you hit this, simply comment on your patch with 'recheck'.

[1] 
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/40/309440/7/13518//run_tests.log


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Monty Taylor

On 05/02/2016 01:53 PM, Shamail Tahir wrote:

Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This
gave us the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata)
releases.

If we were to vote for the name of the P release soon (since the
location is now known) we would be able to have names associated with
the current release cycle (Newton), N+1 (Ocata), and N+2 (P).  This
would also allow us to get back to only voting for one name per release
cycle but consistently have names for N, N+1, and N+2.


Patches already up ...

https://review.openstack.org/#/c/310425/

https://review.openstack.org/#/c/310426/

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Martin André
On Mon, May 2, 2016 at 1:23 PM, Jeff Peeler  wrote:
> On Sun, May 1, 2016 at 5:03 PM, Steven Dake (stdake)  wrote:
>> I don't think a separate repository is the correct approach based upon
>> one-off private conversations with folks at summit. Many people from that list
>> approached me and indicated they would like to see the work integrated in
>> one repository as outlined in my vote proposal email.  The reasons I heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> In favor of the separate repositories, I heard:
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed to
>> communicate these complaints to the core team prior to the vote, so now is
>> the time for fixing that.
>
> I favor the repository split, the reason being that I think
> Ansible along with Kubernetes should each be a separate repository.
> Keeping a monolithic repository is the opposite of the "Unix
> philosophy". It was even recommended at one point to split every
> single service into a separate repository [1].
>
> Repository management, backports, and additional gating are all things
> that I'll admit are more work with more than one single repository.
> However, the ease of ramping up where everything is separated out
> makes it worth it in my opinion. I believe the success of a given
> community is partially due to proper delineation of expertise
> (otherwise, why not put all OpenStack services in one gigantic repo?).
> I'm echoing this comment somebody said at the summit: stretching the
> core team across every orchestration tool is not scalable. I'm really
> hoping more projects will grow around the Kolla ecosystem and can do
> so without being required to become proficient with every other
> orchestration system.
>
> One argument for keeping a single repository is to compare to the
> mesos effort (that has stopped now) in a different repository. But as
> it has already been said, mesos should have been given fairness with
> ansible split out as well. If everything were in a single repository,
> it has been suggested that the community will review more. However, I
> don't personally believe that with gerrit in use that affects things
> at all. OpenStack even has a gerrit dashboard creator [2], but I think
> developers are capable enough at easily finding what they want to
> consistently review.
>
> As I said in a previous reply [3], I don't think git history should
> affect this decision as we can make it work in either scenario. ACL
> permissions seem overly complicated to be in the same repository, even
> if we can arrange for a feature branch to have different permissions
> from the main repo.
>
> My views here are definitely focused on the long term view. If any
> short term plans can be made to allow ourselves to eventually align
> with having separate repositories, I don't think I'd have a problem
> with that. However, I thought the Ansible code was supposed to have
> been separated out a long time ago. This is a natural inflection point
> to change policy and mode of operating, which is why I don't enjoy the
> idea of waiting any longer. Luckily, having Ansible in the same
> repository currently does not inhibit any momentum with Kubernetes in
> a separate repository.
>
> As far as starting the repositories split and then merging them in the
> future (assuming Ansible also stays in one repo), I don't know why we
> would want that. But perhaps after the Kubernetes effort has
> progressed we can better determine if that makes sense with a clear
> view of what the project files actually end up looking like. I don't
> think that any project that changes the containers' ABI is suitable to
> be labeled as "Kolla", so there wouldn't be any dockerfiles as part of
> the repository.
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-April/093213.html
> [2] https://github.com/openstack/gerrit-dash-creator
> [3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/093645.html

Jeff, I 100% agree here.

The strongest argument for single repo, IMO, is that it facilitates
code reviews for changes affecting both the container images and the
deployment tool, and makes it easier for backports.
On the other hand, having "better integration", as Steve put it, is not
always desirable: deployment tools each come with their specific
features and philosophy and I'd hate to see a patch receive negative
feedback because the developers can't agree on 

[openstack-dev] FW: [trove] Cancel weekly Trove meeting this week?

2016-05-02 Thread Amrith Kumar
The Trove weekly meeting on the 4th of May is canceled. We can pick up any 
conversations that we need on IRC as required.

Thanks,

-amrith

> -Original Message-
> From: Amrith Kumar [mailto:amr...@tesora.com]
> Sent: Monday, May 02, 2016 11:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [trove] Cancel weekly Trove meeting this week?
> 
> We all had a week full of meetings last week and I'm not sure that we have
> anything significantly new on the agenda for the weekly meeting on the
> 4th.
> 
> So, if I don't hear anyone object, I'll send an email out by 4pm Eastern
> Time today canceling the Trove weekly meeting for the 4th of May.
> 
> Thanks,
> 
> -amrith
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Davanum Srinivas
Sorry for top-post...

There's a third option: using a feature branch for Kubernetes with a
custom gerrit group:

* Feature branch can be sync'ed from Master periodically
* Feature branch can have its own separate gerrit group.
* We can opt to merge from feature branch to master if necessary.
* We can have minimal changes in the feature branch (only what is
needed for k8s work). Everything else should hit master first and then
sync'ed to the branch.
* We should have a deadline for the feature branch when we think
appropriate (Say Newton-2 Milestone?)
* We can define jobs that run only on feature branch
* I'll assume that folks get promoted from the feature group to the main
group as they earn karma.
* At some point (Newton-2) we take a go/no-go on k8s feature for the
Newton release; if we say no-go, then the feature branch remains for
on-going work while the master branch can be made release-ready.

Worst case scenario, we nuke the feature branch as an experiment
Best case scenario, we can choose either to make the feature branch
into master OR figure out how to split the contents into another repo.
We don't have to decide right now.

Thanks,
Dims

On Mon, May 2, 2016 at 3:38 PM, Steven Dake (stdake)  wrote:
> Yup but that didn't happen with kolla-mesos and I didn't catch it until 2
> weeks after it was locked in stone.  At that point I asked for the ABI to
> be unified to which I got a "shrug" and no action.
>
> If it had been in one repo, everyone would have seen the multiple ABIs and
> rejected the patch in the first place.
>
> FWIW I am totally open to extending the ABI however is necessary to make
> Kolla containers be the reference that other projects use for their
> container deployment technology tooling.  In this case the ABI was
> extended without consultation and without repair after the problem was
> noticed.
>
> Regards
> -steve
>
> On 5/2/16, 12:04 PM, "Fox, Kevin M"  wrote:
>
>>+1 to one set of containers for all. If kolla-k8s needs tweaks to the
>>abi, the request should go to the kolla core team (involving everyone)
>>and discuss why they are needed/reasonable. This should be done
>>regardless of whether there are 1 or 2 repos in the end.
>>
>>Thanks,
>>Kevin
>>
>>From: Steven Dake (stdake) [std...@cisco.com]
>>Sent: Monday, May 02, 2016 11:14 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>>I personally would like to see one set of defaults files for the default
>>config and merging thereof. (the stuff in roles/*/defaults).
>>
>>There would be overlap there.
>>
>>A lot of the overlap involves things like reno, sphinx, documentation,
>>gating, etc.
>>
>>During kolla-mesos, separate containers (IIRC) were made, separate start
>>extension scripts were made, and to my dismay a completely different ABI
>>was implemented.
>>
>>We need one ABI to the containers and that should be laid out in the spec
>>if it isn't already.
>>
>>Regards
>>-steve
>>
>>
>>On 5/2/16, 10:31 AM, "Ryan Hallisey"  wrote:
>>
>>>Most of the code is not an overlap. We will preserve the ABI while
>>>customizing the ansible config generation (if we do end up using it). We
>>>can use some of what's in kolla as a starting point.
>>>
>>>I'd say the code overlap is a bootstrapping point for the project.
>>>
>>>-Ryan
>>>
>>>- Original Message -
>>>From: "Kevin M Fox" 
>>>To: "OpenStack Development Mailing List (not for usage questions)"
>>>
>>>Sent: Monday, May 2, 2016 12:56:22 PM
>>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>>
>>>One thing we didn't talk about too much at the summit is the part of the
>>>spec that says we will reuse a bunch of ansible stuff to generate configs
>>>for the k8s case...
>>>
>>>Do we believe that code would be minimal and not impact separate repo's
>>>much or is the majority of the work in the end going to be focused there?
>>>If most of the code ends up landing there, then its probably not worth
>>>splitting?
>>>
>>>Thanks,
>>>Kevin
>>>
>>>From: Steven Dake (stdake) [std...@cisco.com]
>>>Sent: Monday, May 02, 2016 6:05 AM
>>>To: OpenStack Development Mailing List (not for usage questions)
>>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>>
>>>On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:
>>>
On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
 wrote:
> Although it seems I'm in the minority, I am in favor of unified repo.
>
> From: "Steven Dake (stdake)" 
> Reply-To: "OpenStack Development Mailing List (not for usage
>questions)"
> 
> Date: Sunday, May 1, 2016 at 5:03 PM
> To: "OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Shamail Tahir
On Mon, May 2, 2016 at 2:58 PM, Monty Taylor  wrote:

> On 05/02/2016 01:53 PM, Shamail Tahir wrote:
>
>> Hi everyone,
>>
>> When will we name the P release of OpenStack?  We named two releases
>> simultaneously (Newton and Ocata) during the Mitaka release cycle.  This
>> gave us the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata)
>> releases.
>>
>> If we were to vote for the name of the P release soon (since the
>> location is now known) we would be able to have names associated with
>> the current release cycle (Newton), N+1 (Ocata), and N+2 (P).  This
>> would also allow us to get back to only voting for one name per release
>> cycle but consistently have names for N, N+1, and N+2.
>>
>
> Patches already up ...
>
> https://review.openstack.org/#/c/310425/
>
> https://review.openstack.org/#/c/310426/


Thanks Monty.  Shouldn't we just start naming one release again?  This
will give the community the names of the next three releases
consistently... If we vote for two of them at a time, then we will have
some periods where four names are known and others where we might only
know three (i.e. during Ocata).


>
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Michał Jastrzębski
I agree that we need one set of containers. Containers are kolla,
deployment tools are consumers of kolla. We need our containers rock
solid and decoupled from whatever is deploying them. Every company's ops
shop has its own tooling and methods; let's help them.

I'm not super in favor of an ansible split; my personal favorite is to have
the kolla repo with a few deployment tools, but *all of them stable*. What
is in kolla should be prod ready.

So I'm for second repo for k8s for time being.

On 2 May 2016 at 14:04, Fox, Kevin M  wrote:
> +1 to one set of containers for all. If kolla-k8s needs tweaks to the abi, 
> the request should go to the kolla core team (involving everyone) and discuss 
> why they are needed/reasonable. This should be done regardless of whether 
> there are 1 or 2 repos in the end.
>
> Thanks,
> Kevin
> 
> From: Steven Dake (stdake) [std...@cisco.com]
> Sent: Monday, May 02, 2016 11:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
> I personally would like to see one set of defaults files for the default
> config and merging thereof. (the stuff in roles/*/defaults).
>
> There would be overlap there.
>
> A lot of the overlap involves things like reno, sphinx, documentation,
> gating, etc.
>
> During kolla-mesos, separate containers (IIRC) were made, separate start
> extension scripts were made, and to my dismay a completely different ABI
> was implemented.
>
> We need one ABI to the containers and that should be laid out in the spec
> if it isn't already.
>
> Regards
> -steve
>
>
> On 5/2/16, 10:31 AM, "Ryan Hallisey"  wrote:
>
>>Most of the code is not an overlap. We will preserve the ABI while
>>customizing the ansible config generation (if we do end up using it). We
>>can use some of what's in kolla as a starting point.
>>
>>I'd say the code overlap is a bootstrapping point for the project.
>>
>>-Ryan
>>
>>- Original Message -
>>From: "Kevin M Fox" 
>>To: "OpenStack Development Mailing List (not for usage questions)"
>>
>>Sent: Monday, May 2, 2016 12:56:22 PM
>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>>One thing we didn't talk about too much at the summit is the part of the
>>spec that says we will reuse a bunch of ansible stuff to generate configs
>>for the k8s case...
>>
>>Do we believe that code would be minimal and not impact separate repo's
>>much or is the majority of the work in the end going to be focused there?
>>If most of the code ends up landing there, then its probably not worth
>>splitting?
>>
>>Thanks,
>>Kevin
>>
>>From: Steven Dake (stdake) [std...@cisco.com]
>>Sent: Monday, May 02, 2016 6:05 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>>On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:
>>
>>>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
>>> wrote:
 Although it seems I'm in the minority, I am in favor of unified repo.

 From: "Steven Dake (stdake)" 
 Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
 
 Date: Sunday, May 1, 2016 at 5:03 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 
 Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

 Ryan had rightly pointed out that when we made the original proposal 9am
 morning we had asked folks if they wanted to participate in a separate
 repository.

 I don't think a separate repository is the correct approach based upon
 one-off private conversations with folks at summit.  Many people from that
 list approached me and indicated they would like to see the work integrated
 in one repository as outlined in my vote proposal email.  The reasons I
 heard were:

 Better integration of the community
 Better integration of the code base
 Doesn't present an us vs them mentality that one could argue happened
 during kolla-mesos
 A second repository makes k8s a second class citizen deployment
 architecture without a voice in the full deployment methodology
 Two gating methods versus one
 No going back to a unified repository while preserving git history

 In favor of the separate repositories, I heard:

 It presents a unified workspace for kubernetes alone
 Packaging without ansible is simpler as the ansible directory need not
 be deleted

 There were other complaints but not many pros.  Unfortunately I failed
 to communicate these complaints to the core team prior to the vote, so now
 is

Re: [openstack-dev] [magnum] Proposed Revision to Magnum's Mission

2016-05-02 Thread Hongbin Lu
Hi team,

As Adrian mentioned, we decided to narrow the scope of the Magnum project, and 
needed to revise Magnum’s mission statement to reflect our mission clearly and 
accurately. I would suggest to work on this effort as a team and created an 
etherpad for that:

https://etherpad.openstack.org/p/magnum-mission-statement

Your comments on the etherpad will be appreciated.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-29-16 8:47 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Proposed Revision to Magnum's Mission

Magnum Team,

In accordance with our Fishbowl discussion yesterday at the Newton Design 
Summit in Austin, I have proposed the following revision to Magnum’s mission 
statement:

https://review.openstack.org/311476

The idea is to narrow the scope of our Magnum project to allow us to focus on 
making popular COE software work great with OpenStack, and make it easy for 
OpenStack cloud users to quickly set up fleets of cloud capacity managed by 
chosen COE software (such as Swarm, Kubernetes, Mesos, etc.). Cloud operators 
and users will value Multi-Tenancy for COE’s, tight integration with OpenStack, 
and the ability to source this all as a self-service resource.

We agreed to deprecate and remove the /containers resource from Magnum’s API, 
and will leave the door open for a new OpenStack project with its own name and 
mission to satisfy the interests of our community members who want an OpenStack 
API service that abstracts one or more COE’s.

Regards,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] going forward

2016-05-02 Thread John Dickinson
At the summit last week, the Swift community spent a lot of time discussing the 
feature/hummingbird branch. (For those who don't know, the feature/hummingbird 
branch contains some parts of Swift which have been reimplemented in Go.)

As a result of that summit discussion, we have a plan and a goal: we will 
integrate a subset of the current hummingbird work into Swift's master branch, 
and the future will contain both Python and Go code. We are starting with the 
object server and replication layer.

The high-level plan is below. I've included some general time estimates, but, 
as with all things open-source, these estimates are just that. This work will 
be done when it's done.

Our current short-term focus for Swift is to integrate the feature/crypto work 
to provide at-rest encryption. This crypto work is nearly ready to merge, and 
it is the community focus until it's done. While that crypto work is finishing 
up, we will be defining the minimum deployable functionality from hummingbird 
that is necessary before it can land on master. I expect the crypto work to be 
finished in the next six to eight weeks.

After feature/crypto is merged, as a community we will be implementing any 
missing things identified as necessary to merge. There will be some base 
functionality that needs to be implemented, and there will be a lot of things 
like docs, tests, and deployability work.

Our goal is to have a reasonably ready-to-merge feature branch ready by the 
Barcelona summit. Shortly after Barcelona, we will begin the actual merge of 
the Go code to master.

This work, this plan, and these goals do NOT mean that we are completely 
rewriting Swift in Go. Python will exist in Swift's codebase for a long time to 
come. Our goal is to keep doing the same thing we've been doing for years: 
focus on performance and user needs to give the best possible object storage 
system in the world.


--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Steven Dake (stdake)
Yup but that didn't happen with kolla-mesos and I didn't catch it until 2
weeks after it was locked in stone.  At that point I asked for the ABI to
be unified to which I got a "shrug" and no action.

If it had been in one repo, everyone would have seen the multiple ABIs and
rejected the patch in the first place.

FWIW I am totally open to extending the ABI however is necessary to make
Kolla containers be the reference that other projects use for their
container deployment technology tooling.  In this case the ABI was
extended without consultation and without repair after the problem was
noticed.

Regards
-steve

On 5/2/16, 12:04 PM, "Fox, Kevin M"  wrote:

>+1 to one set of containers for all. If kolla-k8s needs tweaks to the
>abi, the request should go to the kolla core team (involving everyone)
>and discuss why they are needed/reasonable. This should be done
>regardless of whether there are 1 or 2 repos in the end.
>
>Thanks,
>Kevin
>
>From: Steven Dake (stdake) [std...@cisco.com]
>Sent: Monday, May 02, 2016 11:14 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>I personally would like to see one set of defaults files for the default
>config and merging thereof. (the stuff in roles/*/defaults).
>
>There would be overlap there.
>
>A lot of the overlap involves things like reno, sphinx, documentation,
>gating, etc.
>
>During kolla-mesos, separate containers (IIRC) were made, separate start
>extension scripts were made, and to my dismay a completely different ABI
>was implemented.
>
>We need one ABI to the containers and that should be laid out in the spec
>if it isn't already.
>
>Regards
>-steve
>
>
>On 5/2/16, 10:31 AM, "Ryan Hallisey"  wrote:
>
>>Most of the code is not an overlap. We will preserve the ABI while
>>customizing the ansible config generation (if we do end up using it). We
>>can use some of what's in kolla as a starting point.
>>
>>I'd say the code overlap is a bootstrapping point for the project.
>>
>>-Ryan
>>
>>- Original Message -
>>From: "Kevin M Fox" 
>>To: "OpenStack Development Mailing List (not for usage questions)"
>>
>>Sent: Monday, May 2, 2016 12:56:22 PM
>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>>One thing we didn't talk about too much at the summit is the part of the
>>spec that says we will reuse a bunch of ansible stuff to generate configs
>>for the k8s case...
>>
>>Do we believe that code would be minimal and not impact separate repo's
>>much or is the majority of the work in the end going to be focused there?
>>If most of the code ends up landing there, then its probably not worth
>>splitting?
>>
>>Thanks,
>>Kevin
>>
>>From: Steven Dake (stdake) [std...@cisco.com]
>>Sent: Monday, May 02, 2016 6:05 AM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>>On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:
>>
>>>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
>>> wrote:
 Although it seems I'm in the minority, I am in favor of unified repo.

 From: "Steven Dake (stdake)" 
 Reply-To: "OpenStack Development Mailing List (not for usage
questions)"
 
 Date: Sunday, May 1, 2016 at 5:03 PM
 To: "OpenStack Development Mailing List (not for usage questions)"
 
 Subject: [openstack-dev] [kolla][kubernetes] One repo vs two

 Ryan had rightly pointed out that when we made the original proposal 9am
 morning we had asked folks if they wanted to participate in a separate
 repository.

 I don't think a separate repository is the correct approach based upon
 one-off private conversations with folks at summit.  Many people from that
 list approached me and indicated they would like to see the work integrated
 in one repository as outlined in my vote proposal email.  The reasons I
 heard were:

 Better integration of the community
 Better integration of the code base
 Doesn't present an us vs them mentality that one could argue happened
 during kolla-mesos
 A second repository makes k8s a second class citizen deployment
 architecture without a voice in the full deployment methodology
 Two gating methods versus one
 No going back to a unified repository while preserving git history

 In favor of the separate repositories, I heard:

 It presents a unified workspace for kubernetes alone
 Packaging without ansible is simpler as the ansible directory need not
 be deleted

 There were other complaints but not many pros.  

Re: [openstack-dev] [kolla][vote][kubernetes][infra] kolla-kubernetes repository management proposal up for vote

2016-05-02 Thread Martin André
+1 for kolla-kubernetes.

On Mon, May 2, 2016 at 1:07 PM, Mauricio Lima  wrote:
> Just to clarify my vote.
>
> +1 for single repository
>
> 2016-05-02 14:11 GMT-03:00 Jeff Peeler :
>>
>> Also +1 for working on kolla-kubernetes. (Please read this thread if
>> you haven't yet):
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093575.html
>>
>> On Mon, May 2, 2016 at 10:56 AM, Ryan Hallisey 
>> wrote:
>> > +1 to start kolla-kubernetes work.
>> >
>> > - Original Message -
>> > From: "Swapnil Kulkarni" 
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > 
>> > Sent: Monday, May 2, 2016 12:59:40 AM
>> > Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
>> > kolla-kubernetes repository management proposal up for vote
>> >
>> > On Mon, May 2, 2016 at 10:08 AM, Vikram Hosakote (vhosakot)
>> >  wrote:
>> >> A separate repo will land us in the same spot as we had with
>> >> kolla-mesos
>> >> originally.  We had all kinds of variance in the implementation.
>> >>
>> >> I’m in favor of a single repo.
>> >>
>> >> +1 for the single repo.
>> >>
>> >
>> > I agree with you Vikram, but we should consider the bootstrapping
>> > requirements for new deployment technologies and learn from our
>> > failures with kolla-mesos.
>> >
>> > At the same time, it will help us evaluate the deployment technologies
>> > going ahead without distrupting the kolla repo which we can treat as a
>> > repo with stable images & associated deployment tools.
>> >
>> >> Regards,
>> >> Vikram Hosakote
>> >> IRC: vhosakot
>> >>
>> >> From: Vikram Hosakote 
>> >> Date: Sunday, May 1, 2016 at 11:36 PM
>> >>
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> 
>> >> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
>> >> kolla-kubernetes repository management proposal up for vote
>> >>
>> >> Please add me too to the list!
>> >>
>> >> Regards,
>> >> Vikram Hosakote
>> >> IRC: vhosakot
>> >>
>> >> From: Michał Jastrzębski 
>> >> Reply-To: "OpenStack Development Mailing List (not for usage
>> >> questions)"
>> >> 
>> >> Date: Saturday, April 30, 2016 at 9:58 AM
>> >> To: "OpenStack Development Mailing List (not for usage questions)"
>> >> 
>> >> Subject: Re: [openstack-dev] [kolla][vote][kubernetes][infra]
>> >> kolla-kubernetes repository management proposal up for vote
>> >>
>> >> Add me too please Steven.
>> >>
>> >> On 30 April 2016 at 09:50, Steven Dake (stdake) 
>> >> wrote:
>> >>
>> >> Fellow core reviewers,
>> >>
>> >> We had a fantastic turnout at our fishbowl kubernetes as an underlay
>> >> for
>> >> Kolla session.  The etherpad documents the folks interested and
>> >> discussion
>> >> at summit[1].
>> >>
>> >> This proposal is mostly based upon a combination of several discussions
>> >> at
>> >> open design meetings coupled with the kubernetes underlay discussion.
>> >>
>> >> The proposal (and what we are voting on) is as follows:
>> >>
>> >> Folks in the following list will be added to a kolla-k8s-core group.
>> >>
>> >>   This kolla-k8s-core group will be responsible for code reviews and
>> >> code submissions to the kolla repository for the /kubernetes top level
>> >> directory. Individuals in kolla-k8s-core that consistently approve (+2)
>> >> or disapprove (-2) on TLD directories other than kubernetes will be
>> >> handled on a case by case basis with several "training warnings"
>> >> followed by removal from the kolla-k8s-core group.  The kolla-k8s-core
>> >> group will be added as a subgroup of the kolla-core reviewer team, which
>> >> means they in effect have all of the ACL access of the existing kolla
>> >> repository.  I think it is better in this case to trust these
>> >> individuals to do the right thing and only approve changes for the
>> >> kubernetes directory until proposed for the kolla-core reviewer group,
>> >> where they can gate changes to any part of the repository.
>> >>
>> >> Britt Houser
>> >>
>> >> mark casey
>> >>
>> >> Steven Dake (delta-alpha-kilo-echo)
>> >>
>> >> Michael Schmidt
>> >>
>> >> Marian Schwarz
>> >>
>> >> Andrew Battye
>> >>
>> >> Kevin Fox (kfox)
>> >>
>> >> Sidharth Surana (ssurana)
>> >>
>> >>   Michal Rostecki (mrostecki)
>> >>
>> >>Swapnil Kulkarni (coolsvap)
>> >>
>> >>MD NADEEM (mail2nadeem92)
>> >>
>> >>Vikram Hosakote (vhosakot)
>> >>
>> >>Jeff Peeler (jpeeler)
>> >>
>> >>Martin Andre (mandre)
>> >>
>> >>Ian Main (Slower)
>> >>
>> >> Hui Kang (huikang)
>> >>
>> >> Serguei Bezverkhi (sbezverk)
>> >>
>> >> Alex Polvi (polvi)
>> >>
>> >> Rob Mason
>> >>
>> >> Alicja Kwasniewska
>> >>
>> >> sean mooney (sean-k-mooney)

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Fox, Kevin M
+1 to one set of containers for all. If kolla-k8s needs tweaks to the abi, the 
request should go to the kolla core team (involving everyone) and discuss why 
they are needed/reasonable. This should be done regardless of whether there 
are 1 or 2 repos in the end.

Thanks,
Kevin

From: Steven Dake (stdake) [std...@cisco.com]
Sent: Monday, May 02, 2016 11:14 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two

I personally would like to see one set of defaults files for the default
config and merging thereof. (the stuff in roles/*/defaults).

There would be overlap there.

A lot of the overlap involves things like reno, sphinx, documentation,
gating, etc.

During kolla-mesos, separate containers (IIRC) were made, separate start
extension scripts were made, and to my dismay a completely different ABI
was implemented.

We need one ABI to the containers and that should be laid out in the spec
if it isn't already.
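
To make "one ABI" concrete: the contract is essentially the config.json
file each container consumes at startup. A from-memory sketch (paths and
field names may differ slightly from what is actually in tree):

  {
      "command": "/usr/bin/nova-compute",
      "config_files": [
          {
              "source": "/var/lib/kolla/config_files/nova.conf",
              "dest": "/etc/nova/nova.conf",
              "owner": "nova",
              "perm": "0600"
          }
      ]
  }

Anything that ships a differently shaped contract effectively forks the
images, which is exactly the kolla-mesos problem described above.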

Regards
-steve


On 5/2/16, 10:31 AM, "Ryan Hallisey"  wrote:

>Most of the code is not an overlap. We will preserve the ABI while
>customizing the ansible config generation (if we do end up using it). We
>can use some of what's in kolla as a starting point.
>
>I'd say the code overlap is a bootstrapping point for the project.
>
>-Ryan
>
>- Original Message -
>From: "Kevin M Fox" 
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Sent: Monday, May 2, 2016 12:56:22 PM
>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>One thing we didn't talk about too much at the summit is the part of the
>spec that says we will reuse a bunch of ansible stuff to generate configs
>for the k8s case...
>
>Do we believe that code would be minimal and not impact separate repo's
>much or is the majority of the work in the end going to be focused there?
>If most of the code ends up landing there, then its probably not worth
>splitting?
>
>Thanks,
>Kevin
>
>From: Steven Dake (stdake) [std...@cisco.com]
>Sent: Monday, May 02, 2016 6:05 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [kolla][kubernetes] One repo vs two
>
>On 5/1/16, 10:32 PM, "Swapnil Kulkarni"  wrote:
>
>>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
>> wrote:
>>> Although it seems I'm in the minority, I am in favor of unified repo.
>>>
>>> From: "Steven Dake (stdake)" 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>>questions)"
>>> 
>>> Date: Sunday, May 1, 2016 at 5:03 PM
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>>
>>> Ryan had rightly pointed out that when we made the original proposal 9am
>>> morning we had asked folks if they wanted to participate in a separate
>>> repository.
>>>
>>> I don't think a separate repository is the correct approach based upon
>>> one-off private conversations with folks at summit.  Many people from that
>>> list approached me and indicated they would like to see the work
>>> integrated in one repository as outlined in my vote proposal email.  The
>>> reasons I heard were:
>>>
>>> Better integration of the community
>>> Better integration of the code base
>>> Doesn't present an us vs them mentality that one could argue happened
>>> during kolla-mesos
>>> A second repository makes k8s a second class citizen deployment
>>> architecture without a voice in the full deployment methodology
>>> Two gating methods versus one
>>> No going back to a unified repository while preserving git history
>>>
>>> In favor of the separate repositories, I heard:
>>>
>>> It presents a unified workspace for kubernetes alone
>>> Packaging without ansible is simpler as the ansible directory need not
>>> be deleted
>>>
>>> There were other complaints but not many pros.  Unfortunately I failed
>>> to communicate these complaints to the core team prior to the vote, so
>>> now is the time for fixing that.
>>>
>>> I'll leave it open to the new folks that want to do the work if they
>>>want to
>>> work on an offshoot repository and open us up to the possible problems
>>> above.
>>>
>>> If you are on this list:
>>>
>>> Ryan Hallisey
>>> Britt Houser
>>>
>>> mark casey
>>>
>>> Steven Dake (delta-alpha-kilo-echo)
>>>
>>> Michael Schmidt
>>>
>>> Marian Schwarz
>>>
>>> Andrew Battye
>>>
>>> Kevin Fox (kfox)
>>>
>>> Sidharth Surana (ssurana)
>>>
>>>  Michal Rostecki (mrostecki)
>>>
>>>   Swapnil Kulkarni (coolsvap)
>>>
>>>   MD NADEEM (mail2nadeem92)
>>>
>>>   Vikram Hosakote (vhosakot)
>>>
>>>   Jeff Peeler (jpeeler)
>>>
>>>   Martin Andre (mandre)
>>>
>>> 

[openstack-dev] [oslo] A bunch of releases

2016-05-02 Thread Joshua Harlow
Just wanted to let people know about the new proposed releases (since 
there wasn't an IRC meeting today):


If folks think these need updates or version number changes please 
comment on the review (or here).


https://review.openstack.org/#/c/311812/ (the release proposals)

https://review.openstack.org/#/c/311813/ (the uc update)

Those should hopefully go through fine (assuming the depends-on reviews 
they need go through quickly/soon),


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Steven Dake (stdake)
Jeff,

What you propose is reasonable, but the timeline to make all that long-term
vision happen is long, and we want to get rolling now, not in the 4 to 6
weeks it would take to sort out a kolla-docker and kolla-ansible split.

FWIW, it will make backporting a seriously painful experience, and I am
totally not in favor of doing any type of splitting of docker and ansible
until the core team is fully comfortable with maintaining stable backports.

Further I want the core team to understand how gating works (and that
happens by doing).  Gating experience will come in this cycle.

By doing these two things we could possibly have a split repo in as little
as a week if 1 person weren't responsible for all of the work.  To get
there takes training on backporting and gating, which I expect people will
learn well over the next cycle.

Regards
-steve

On 5/2/16, 11:23 AM, "Jeff Peeler"  wrote:

>On Sun, May 1, 2016 at 5:03 PM, Steven Dake (stdake) 
>wrote:
>> I don't think a separate repository is the correct approach based upon
>> one-off private conversations with folks at summit. Many people from that
>> list approached me and indicated they would like to see the work
>> integrated in one repository as outlined in my vote proposal email.  The
>> reasons I heard were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened
>> during kolla-mesos
>> A second repository makes k8s a second class citizen deployment
>> architecture without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> In favor of the separate repositories, I heard:
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not
>> be deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed
>> to communicate these complaints to the core team prior to the vote, so
>> now is the time for fixing that.
>
>I favor the repository split, the reason being that I think
>Ansible along with Kubernetes should each be a separate repository.
>Keeping a monolithic repository is the opposite of the "Unix
>philosophy". It was even recommended at one point to split every
>single service into a separate repository [1].
>
>Repository management, backports, and additional gating are all things
>that I'll admit are more work with more than one single repository.
>However, the ease of ramping up where everything is separated out
>makes it worth it in my opinion. I believe the success of a given
>community is partially due to proper delineation of expertise
>(otherwise, why not put all OpenStack services in one gigantic repo?).
>I'm echoing this comment somebody said at the summit: stretching the
>core team across every orchestration tool is not scalable. I'm really
>hoping more projects will grow around the Kolla ecosystem and can do
>so without being required to become proficient with every other
>orchestration system.
>
>One argument for keeping a single repository is to compare to the
>mesos effort (that has stopped now) in a different repository. But as
>it has already been said, mesos should have been given fairness with
>ansible split out as well. If everything were in a single repository,
>it has been suggested that the community will review more. However, I
>don't personally believe that with gerrit in use that affects things
>at all. OpenStack even has a gerrit dashboard creator [2], but I think
>developers are capable enough at easily finding what they want to
>consistently review.
>
>As I said in a previous reply [3], I don't think git history should
>affect this decision as we can make it work in either scenario. ACL
>permissions seem overly complicated to be in the same repository, even
>if we can arrange for a feature branch to have different permissions
>from the main repo.
>
>My views here are definitely focused on the long term view. If any
>short term plans can be made to allow ourselves to eventually align
>with having separate repositories, I don't think I'd have a problem
>with that. However, I thought the Ansible code was supposed to have
>been separated out a long time ago. This is a natural inflection point
>to change policy and mode of operating, which is why I don't enjoy the
>idea of waiting any longer. Luckily, having Ansible in the same
>repository currently does not inhibit any momentum with Kubernetes in
>a separate repository.
>
>As far as starting the repositories split and then merging them in the
>future (assuming Ansible also stays in one repo), I don't know why we
>would want that. But perhaps after the Kubernetes effort has
>progressed we can better determine if that makes sense with a clear
>view of what the project files actually end up looking like. I don't
>think that any 

Re: [openstack-dev] [Nova] Libvirt version requirement

2016-05-02 Thread Michael Still
On Sun, May 1, 2016 at 10:27 PM, ZhiQiang Fan  wrote:

> Hi Nova cores,
>
> There is a spec [1] submitted to the Telemetry project for the Newton
> release, which mentions that a new feature requires libvirt >= 1.3.4. I'm
> not sure if this will have a bad impact on the Nova service, so I opened
> this thread to wait
> for your opinions.
>

Thanks for the heads up.

Nova currently has one feature that requires libvirt 1.3.0 or greater. That
said, I don't think we're particularly opposed to newer libvirts, it's just
that they're not very widely deployed.
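
For context on what such a version requirement looks like in driver code,
here is a rough sketch of the usual gating pattern (the constant and helper
names are made up for illustration; this is not nova's actual code):

  # hypothetical minimum version for the proposed telemetry feature
  MIN_LIBVIRT_FOR_FEATURE = (1, 3, 4)

  def _version_tuple(v):
      # libvirt encodes version 1.3.4 as the integer 1003004
      return (v // 1000000, (v % 1000000) // 1000, v % 1000)

  def has_min_version(conn, minimum):
      # conn is a libvirt.virConnect; getLibVersion() returns the int form
      return _version_tuple(conn.getLibVersion()) >= minimum

The feature would then only be enabled when has_min_version() holds for the
host, and skipped gracefully otherwise.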

That said, that spec is very terse and could do with some more detail. That
would help people decide where to go from here.

Cheers,
Michael


> [1]: https://review.openstack.org/#/c/311655/
>
> Thanks!
> ZhiQiang Fan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Britt Houser (bhouser)
I think several of the people who have expressed support of split repo have 
given the caveat that they want it integrated once it matures.  I know that 
merging repos at a future date without losing history is a major drawback to 
this approach.  What if instead of separate repo, we just had a "k8s" branch 
with periodic (weekly?) syncs from master?  That would allow easy merge of git 
history at the point that k8s meets the "stable" requirement.  Would having a 
separate branch in the same repo give the kolla-k8s-core the independence 
desired for quick development without infringing on master?  Is it possible in 
gerrit for kolla-k8s-core to have +2 on the k8s branch and not master?  Just food for 
thought.
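
If it is possible (I believe per-branch ACLs are), the access rules would
presumably be something like this rough project.config sketch, where the
branch and group names are hypothetical:

  [access "refs/heads/k8s"]
      # k8s cores own the k8s branch...
      label-Code-Review = -2..+2 group kolla-k8s-core
      submit = group kolla-k8s-core

  [access "refs/heads/master"]
      # ...while master stays with the existing core team
      label-Code-Review = -2..+2 group kolla-core
      submit = group kolla-core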

Thx,
britt




On 5/2/16, 1:32 AM, "Swapnil Kulkarni"  wrote:

>On Mon, May 2, 2016 at 9:54 AM, Britt Houser (bhouser)
> wrote:
>> Although it seems I'm in the minority, I am in favor of unified repo.
>>
>> From: "Steven Dake (stdake)" 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Sunday, May 1, 2016 at 5:03 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: [openstack-dev] [kolla][kubernetes] One repo vs two
>>
>> Ryan had rightly pointed out that when we made the original proposal 9am
>> morning we had asked folks if they wanted to participate in a separate
>> repository.
>>
>> I don't think a separate repository is the correct approach based upon
>> one-off private conversations with folks at summit.  Many people from that list
>> approached me and indicated they would like to see the work integrated in
>> one repository as outlined in my vote proposal email.  The reasons I heard
>> were:
>>
>> Better integration of the community
>> Better integration of the code base
>> Doesn't present an us vs them mentality that one could argue happened during
>> kolla-mesos
>> A second repository makes k8s a second class citizen deployment architecture
>> without a voice in the full deployment methodology
>> Two gating methods versus one
>> No going back to a unified repository while preserving git history
>>
>> In favor of the separate repositories, I heard:
>>
>> It presents a unified workspace for kubernetes alone
>> Packaging without ansible is simpler as the ansible directory need not be
>> deleted
>>
>> There were other complaints but not many pros.  Unfortunately I failed to
>> communicate these complaints to the core team prior to the vote, so now is
>> the time for fixing that.
>>
>> I'll leave it open to the new folks that want to do the work if they want to
>> work on an offshoot repository and open us up to the possible problems
>> above.
>>
>> If you are on this list:
>>
>> Ryan Hallisey
>> Britt Houser
>>
>> mark casey
>>
>> Steven Dake (delta-alpha-kilo-echo)
>>
>> Michael Schmidt
>>
>> Marian Schwarz
>>
>> Andrew Battye
>>
>> Kevin Fox (kfox)
>>
>> Sidharth Surana (ssurana)
>>
>>  Michal Rostecki (mrostecki)
>>
>>   Swapnil Kulkarni (coolsvap)
>>
>>   MD NADEEM (mail2nadeem92)
>>
>>   Vikram Hosakote (vhosakot)
>>
>>   Jeff Peeler (jpeeler)
>>
>>   Martin Andre (mandre)
>>
>>   Ian Main (Slower)
>>
>> Hui Kang (huikang)
>>
>> Serguei Bezverkhi (sbezverk)
>>
>> Alex Polvi (polvi)
>>
>> Rob Mason
>>
>> Alicja Kwasniewska
>>
>> sean mooney (sean-k-mooney)
>>
>> Keith Byrne (kbyrne)
>>
>> Zdenek Janda (xdeu)
>>
>> Brandon Jozsa (v1k0d3n)
>>
>> Rajath Agasthya (rajathagasthya)
>> Jinay Vora
>> Hui Kang
>> Davanum Srinivas
>>
>>
>>
>> Please speak up if you are in favor of a separate repository or a unified
>> repository.
>>
>> The core reviewers will still take responsibility for determining if we
>> proceed on the action of implementing kubernetes in general.
>>
>> Thank you
>> -steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
I am in favor of having two separate repos and evaluating the
merge/split option later.
Though in the longer run, I would recommend having a single repo with
multiple stable deployment tools (maybe too early to share views, but
yeah).
>
>Swapnil
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Interest in contributing to OpenStack

2016-05-02 Thread Thierry Carrez

Djimeli Konrad wrote:

My name is Djimeli Konrad, a second-year computer science student from
the University of Buea, Cameroon. I am a GSoC 2015 participant,
and I have also worked on some open-source projects on github
(http://github.com/djkonro) and sourceforge
(https://sourceforge.net/u/konrado/profile/). I am very passionate
about cloud development, distributed systems, virtualization and I
would like to contribute to OpenStack.
[...]


Hi Konrad,

Welcome to the OpenStack Community! It may take a bit for the Glance 
project team to answer your email, as a lot of us were traveling to 
Austin last week for the OpenStack Summit. I'm sure they will get back 
to you as soon as possible.


Regards,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Could you tell me the best way to develop openstack in Windows?

2016-05-02 Thread 박준하
Hi folks,

I'm a beginner at OpenStack development. I've been really interested in
getting to know OpenStack and wanted to modify and test it.

But I'm using PyCharm on Windows 7, and it's very hard to program and
test when I try to change the code a little bit.

I know you guys are great open-source developers, especially on
OpenStack. Could you tell me about your environment for developing on
Windows?

Thanks.

Jun-ha, Park

Freshman of Cloud Technical Group KINX corp., Korea

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Requirements for becoming approved official project

2016-05-02 Thread Thierry Carrez

Shinobu Kinjo wrote:

I guess it's usable [1] [2] [3], and probably more...

The reason I can still only guess is that there is a huge amount of
documentation!
It's great work, but there is too much of it.


We have transitioned most of the documentation off the wiki, but there 
are still a number of pages that are not properly deprecated.



[1] https://wiki.openstack.org/wiki/PTL_Guide


This is now mostly replaced by the project team guide, so I marked this 
one as deprecated.


As far as the initial election goes, if there is a single candidate there is 
no need to organize a formal election. If you need to run one, you can use CIVS 
(http://civs.cs.cornell.edu/) since that is what we use for the official 
elections: https://wiki.openstack.org/wiki/Election_Officiating_Guidelines


Regards,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] amphora flavour and load balancer topology

2016-05-02 Thread Michael Johnson
Hi Lingxian,

For #1, we create a nova flavor for the amphora in the devstack
plugin.  It is currently:
nova flavor-create --is-public False m1.amphora ${OCTAVIA_AMP_FLAVOR_ID} 1024 2 1
(i.e. name m1.amphora, 1024 MB of RAM, a 2 GB disk, and 1 vCPU)

I have not done extensive testing with these settings to optimize it.
We were shooting for the minimum viable config to run the gate tests.
However, I expect over time we have reduced the requirements and you
might be able to get away with slightly less now.

On #2, we would like to enable neutron flavors to support the
selection of topology.  For example, "bronze" flavor may be
standalone, "silver" would be active/standby.  However, that
capability is not yet implemented.  Feel free to take that on if you
have the cycles... grin.
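
A minimal sketch of the idea, purely illustrative (no such mapping exists
in octavia today, and the flavor names are made up):

  # operator-defined flavors mapped to load balancer topologies
  FLAVOR_TOPOLOGY = {
      'bronze': 'SINGLE',          # cheaper, no failover
      'silver': 'ACTIVE_STANDBY',  # paired amphorae with failover
  }

  def topology_for(flavor_name, default='SINGLE'):
      # fall back to the deployer-configured default topology
      return FLAVOR_TOPOLOGY.get(flavor_name, default)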

Michael



On Mon, May 2, 2016 at 2:40 PM, Lingxian Kong  wrote:
> Hi, octavia guys,
>
> Recently, we have been thinking about deploying octavia into
> production (pre-production first in our case). Here are some
> questions/concerns that need your suggestions and feedback.
>
> 1. Octavia will use a default instance flavor, which is
> pre-configured, to create amphorae. Do you have any suggestion about
> how to choose the appropriate amphora flavor? Or has anyone run
> load balancer performance tests with different amphora flavors? It's
> important because we need to charge users according to the flavor.
>
> 2. Currently, Octavia supports 2 topologies for load balancers,
> SINGLE/ACTIVE-STANDBY, which is also configured by the deployer. Is there
> any possibility for users to decide which one to use? E.g. adding
> a param when creating a load balancer. I believe the SLA differs between
> the 2 topologies; some users can afford the failure when using SINGLE
> mode, just because they want to be charged less.
>
> Regards!
> ---
> Lingxian Kong
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Updates, upcoming events and more awareness

2016-05-02 Thread Nikhil Komawar
Hello everyone,

It was great to meet those of you who were at the summit; we missed those
who couldn't come. So, for those who are waiting for updates, I wanted to
share some awareness of what's coming in the next few weeks.

* I will be sending summaries of individual discussion items from the
summit, over the next few days
* The Glance weekly meetings will continue from this week onward, i.e. our
next meeting is on Thursday May 5th
* The Glare weekly meetings will tentatively resume May 16th unless
someone needs one earlier
* The Glance mid-cycle will be June 15-17. Please update your *final*
RSVP by Wednesday May 18th [1]
* We will be having monthly as well as ad-hoc virtual sync on import
refactor work. The team will also tentatively meet on IRC to discuss
just import implementation 30 mins prior to Glance weekly meetings i.e.
at 1330 UTC on Thursdays starting May 12th on #openstack-glance (time:
subject to change)
* We will be having weekly 30 mins sync on Nova v1, v2 work prior to
Glare weekly meetings i.e. at 1700 UTC on Mondays starting May 16th on
#openstack-glance
* We will be having syncs on image sharing stuff whenever there's no
Nova v1, v2 or import meeting planned, at those respective times
* The above three syncs are informal and for those who are actively
participating or have a strong say in the work. You know who you are, so
please attend without fail (barring exceptions of course)
* You can expect a proposed review from me against the glance-specs repo
later this week that describes Newton priorities and related work
* Glance processes for specs, lite-specs, freeze, review cadence, etc.
need to be finalized by EOW next week i.e. "Friday the 13th" of May. If
you have a strong say in what the process should be, you need to
organize a meeting at least 3-4 days before so that we all can come to
an agreement. If I don't see any momentum on that by EOW this week, I
will come up with a process myself
* I will propose a review to update the dates for the respective events
after the Glance weekly meeting this week, so that they show up on [2]

Finally, if you need any more updates feel free to reach out to me
either via email, IRC or reply here.

[1] https://etherpad.openstack.org/p/newton-glance-midcycle-meetup
[2] http://releases.openstack.org/newton/schedule.html

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [glare] [heat] [tosca] [tacker] [murano] [magnum] [app-catalog] Austin summit summary: Generic cataloging and Glare v1 API

2016-05-02 Thread Nikhil Komawar
Added a few more tags to the subject line.

On 5/2/16 7:05 PM, Nikhil Komawar wrote:
> Hello everyone,
>
> Just wanted to send a brief summary of the discussions at the summit.
> This list is not holistic however, it covers the relevant aspects that
> various stakeholders need to be aware of.
>
>   * Glare is useful for different use cases in OpenStack including
> currently being asked for in Heat, Murano and TOSCA
>   * Heat needs something for usage in Newton
>   * Murano needs a stable API to adapt to the changes, as they
> currently use the experimental version
>   * The Glance team will continue to make progress on this effort and
> plans to have a POC after Newton R-16 [1]
>   * The initial plan is to focus on base artifact (no data asset
> associated) and then support at least one artifact type
>   * The first artifact can be Murano application catalogs or Heat
> templates depending on either team's priorities when Glare is ready
> for consumption
>   * In Newton, we will focus on the adoption of this service in at least
> the above mentioned two projects and getting the API in good shape
>   * Images compatibility is deferred for now
>   * Glare will be a side-priority for Newton meaning most of the cores
> are currently not expected to prioritize reviews on it except for
> those who want to focus on cross project initiatives and those
> involved in its adoption
>
> For more information please reach out to me on #openstack-glance, email,
> reply here etc.
>
> [1] http://releases.openstack.org/newton/schedule.html
>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Token providers and Fernet as the default

2016-05-02 Thread Clint Byrum
Hello! I enjoyed very much listening in on the default token provider
work session last week in Austin, so thanks everyone for participating
in that. I did not speak up then, because I wasn't really sure of this
idea that has been bouncing around in my head, but now I think it's the
case and we should consider this.

Right now, Keystones without fernet keys, are issuing UUID tokens. These
tokens will be in the database, and valid, for however long the token
TTL is.

The moment that one changes the configuration, keystone will start
rejecting these tokens. This will cause disruption, and I don't think
that is fair to the users who will likely be shown new bugs in their
code at a very unexpected moment.

I wonder if one could merge UUID and Fernet into a provider which
handles this transition gracefully:

if self._fernet_keys:
  return self._issue_fernet_token()
else:
  return self._issue_uuid_token()

And in the validation, do the same, but also with an eye toward keeping
the UUID tokens alive:

if self._fernet_keys:
  try:
self._validate_fernet_token()
  except InvalidFernetFormatting:
self._validate_uuid_token()
else:
  self._validate_uuid_token()

So that while one is rolling out new keystone nodes and syncing fernet
keys, all tokens issued would validated properly, with minimal extra
cost to support both (basically just a number of UUID tokens will need
to be parsed twice, once as Fernet, and once as UUID).
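
To make that concrete, here is a toy model of the same idea -- a sketch
only, not keystone's actual provider interface: a dict stands in for the
token table, payloads stand in for token data, and cryptography's Fernet
(which the real fernet provider builds on) does the round-tripping:

import uuid

from cryptography.fernet import Fernet, InvalidToken

class TransitionalProvider(object):
    def __init__(self, fernet_key=None):
        # No key distributed yet: behave exactly like the UUID provider.
        self._fernet = Fernet(fernet_key) if fernet_key else None
        self._uuid_db = {}  # stands in for keystone's token table

    def issue_token(self, payload):
        if self._fernet:
            return self._fernet.encrypt(payload)
        token_id = uuid.uuid4().hex
        self._uuid_db[token_id] = payload
        return token_id

    def validate_token(self, token):
        if self._fernet:
            try:
                return self._fernet.decrypt(token)
            except (InvalidToken, TypeError):
                # Not Fernet-shaped: fall through so tokens issued
                # before the key rollout stay valid until they expire.
                pass
        return self._uuid_db[token]

provider = TransitionalProvider()
old_token = provider.issue_token(b'user-x')         # UUID path
provider._fernet = Fernet(Fernet.generate_key())    # keys arrive later
assert provider.validate_token(old_token) == b'user-x'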

Thoughts? I think doing this would make changing the default fairly
uncontroversial.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Timeframe for naming the P release?

2016-05-02 Thread Brian Haley

On 05/02/2016 02:53 PM, Shamail Tahir wrote:

Hi everyone,

When will we name the P release of OpenStack?  We named two releases
simultaneously (Newton and Ocata) during the Mitaka release cycle.  This gave us
the names for the N (Mitaka), N+1 (Newton), and N+2 (Ocata) releases.

If we were to vote for the name of the P release soon (since the location is now
known) we would be able to have names associated with the current release cycle
(Newton), N+1 (Ocata), and N+2 (P).  This would also allow us to get back to
only voting for one name per release cycle but consistently have names for N,
N+1, and N+2.


Is there really going to be an option besides Plymouth?  I remember something 
important happened there in 1620 ;-)


https://en.wikipedia.org/wiki/Plymouth,_Massachusetts

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] - suggested development workflow without ./rejoin-stack.sh ?

2016-05-02 Thread Kevin Benton
This patch removed the ./rejoin-stack.sh script:
https://review.openstack.org/#/c/291453/

I relied on this heavily in my development VM which sees lots of restarts
because of various things (VM becomes unresponsive in load testing, my
laptop has a kernel panic, etc). Normally this was not a big deal because I
could ./rejoin-stack.sh and pick up where I left off (all db objects,
virtual interfaces, instance images, etc all intact).

Now am I correct in understanding that when this happens there is no way to
restart the services in a simple manner without blowing away everything and
starting over? Unless I'm missing some way to run ./stack.sh without losing
previous state, this seems like a major regression (went from mostly
working ./rejoin-stack.sh to nothing).

What is the recommended way to use devstack without being a power outage
away from losing hours of work?

Thanks,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Coupler] Austin Summit vBrownBag slide

2016-05-02 Thread Zhipeng Huang
Hi,

For those of you who are interested, you could find our slide at
http://www.slideshare.net/zhipengh/storage-is-not-virtualized-enough-part-2-lets-do-service-chaining


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] - suggested development workflow without ./rejoin-stack.sh ?

2016-05-02 Thread Timothy Symanczyk
Thanks for asking this; I would also like to know what the “real” answer is.

What I’ve found myself doing recently, since it was unwise to count 100%
on rejoin-stack.sh, was to “do” everything via scripts and take periodic
backups of them. You’re still only one power outage away from losing
your state, but ideally a clean install and rerunning the same scripts
could mostly recover it. That said, most of what I’d been working on has
a noop result if the same command is repeated multiple times (like
“openstack role create”), so slowly appending to the script and just
rerunning it in its entirety was a completely fine workflow. If you’re
doing stuff where that’s not the case, this will not be a good solution.
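
For what it's worth, here is a minimal sketch of that pattern, assuming
the standard openstack CLI on PATH and OS_* credentials already in the
environment (the role name is made up):

import subprocess

def openstack(*args):
    # Thin wrapper over the openstack CLI; raises if the command fails.
    return subprocess.check_output(('openstack',) + args).decode()

def ensure_role(name):
    # Guard the create so rerunning the whole script stays a noop for
    # anything that already exists.
    existing = openstack('role', 'list', '-f', 'value', '-c', 'Name')
    if name not in existing.splitlines():
        openstack('role', 'create', name)

ensure_role('my-test-role')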

Tim

From: Kevin Benton
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, May 2, 2016 at 3:10 PM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [devstack] - suggested development workflow without
./rejoin-stack.sh ?

This patch removed the ./rejoin-stack.sh script: 
https://review.openstack.org/#/c/291453/

I relied on this heavily in my development VM which sees lots of restarts 
because of various things (VM becomes unresponsive in load testing, my laptop 
has a kernel panic, etc). Normally this was not a big deal because I could 
./rejoin-stack.sh and pick up where I left off (all db objects, virtual 
interfaces, instance images, etc all intact).

Now am I correct in understanding that when this happens there is no way to 
restart the services in a simple manner without blowing away everything and 
starting over? Unless I'm missing some way to run ./stack.sh without losing 
previous state, this seems like a major regression (went from mostly working 
./rejoin-stack.sh to nothing).

What is the recommended way to use devstack without being a power outage away 
from losing hours of work?

Thanks,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] - suggested development workflow without ./rejoin-stack.sh ?

2016-05-02 Thread Shinobu Kinjo
Honestly, those kinds of major changes have to be discussed very carefully.

On Tue, May 3, 2016 at 7:10 AM, Kevin Benton  wrote:
> This patch removed the ./rejoin-stack.sh script:
> https://review.openstack.org/#/c/291453/
>
> I relied on this heavily in my development VM which sees lots of restarts
> because of various things (VM becomes unresponsive in load testing, my
> laptop has a kernel panic, etc). Normally this was not a big deal because I
> could ./rejoin-stack.sh and pick up where I left off (all db objects,
> virtual interfaces, instance images, etc all intact).
>
> Now am I correct in understanding that when this happens there is no way to
> restart the services in a simple manner without blowing away everything and
> starting over? Unless I'm missing some way to run ./stack.sh without losing
> previous state, this seems like a major regression (went from mostly working
> ./rejoin-stack.sh to nothing).
>
> What is the recommended way to use devstack without being a power outage
> away from losing hours of work?
>
> Thanks,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Email:
shin...@linux.com
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Michał Jastrzębski
Well, I don't think we should rely on ansible's config generation. We
can't, really, as it's wired into ansible too much. The jinja2 templates
in Dockerfiles aren't connected to ansible in any way and are perfectly
reusable.
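
For example, a config template can be rendered with nothing but the
jinja2 library itself -- the template text and variables below are
illustrative, not one of Kolla's actual templates:

from jinja2 import Environment

template = Environment().from_string(
    '[DEFAULT]\n'
    'debug = {{ debug }}\n'
    'transport_url = rabbit://{{ rabbit_host }}:5672/\n'
)

print(template.render(debug=True, rabbit_host='10.0.0.5'))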

On 2 May 2016 at 16:13, Qiu Yu  wrote:
> On Mon, May 2, 2016 at 12:38 PM, Steven Dake (stdake) 
> wrote:
>>
>> Yup but that didn't happen with kolla-mesos and I didn't catch it until 2
>> weeks after it was locked in stone.  At that point I asked for the ABI to
>> be unified to which I got a "shrug" and no action.
>>
>> If it has been in one repo, everyone would have seen the multiple ABIs and
>> rejected the patch in the first place.
>>
>> FWIW I am totally open to extending the ABI however is necessary to make
>> Kolla containers be the reference that other projects use for their
>> container deployment technology tooling.  In this case the ABI was
>> extended without consultation and without repair after the problem was
>> noticed.
>
>
> ABI has been mentioned a lot in both this thread and the spec code
> review. Does it refer to the container image only, or does it cover
> other parts, like the jinja2 templates for config generation, as well?
>
> That is the part I think needs more clarification. Even though we
> treat Kubernetes as just another deployment tool, if it still relies
> on Ansible to generate configurations (as proposed in the spec [1]),
> then there's no clean way to centralize all the Kube-related stuff in
> a separate repo.
>
> If we're going to re-use Kolla's jinja2 templates and ini merging
> (which heavily depend on Ansible modules as of now), I think it is
> practically easier to bootstrap the Kubernetes stuff in the same Kolla
> repo. But other than that, I'm in favor of a separate kolla-kubernetes
> repo.
>
> [1] https://review.openstack.org/#/c/304182
>
> QY
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [glare] [heat] [tosca] [tacker] Austin summit summary: Generic cataloging and Glare v1 API

2016-05-02 Thread Nikhil Komawar
Hello everyone,

Just wanted to send a brief summary of the discussions at the summit.
This list is not exhaustive; however, it covers the relevant aspects
that various stakeholders need to be aware of.

  * Glare is useful for different use cases in OpenStack including
currently being asked for in Heat, Murano and TOSCA
  * Heat needs something for usage in Newton
  * Murano needs a stable API to adapt to the changes, as they currently
use the experimental version
  * The Glance team will continue to make progress on this effort and
plans to have a POC after Newton R-16 [1]
  * The initial plan is to focus on base artifact (no data asset
associated) and then support at least one artifact type
  * The first artifact can be Murano application catalogs or Heat
templates depending on either team's priorities when Glare is ready
for consumption
  * In Newton, we will focus on the adoption of this service in at least
the above mentioned two projects and getting the API in good shape
  * Images compatibility is deferred for now
  * Glare will be a side-priority for Newton meaning most of the cores
are currently not expected to prioritize reviews on it except for
those who want to focus on cross project initiatives and those
involved in its adoption

For more information please reach out to me on #openstack-glance, email,
reply here etc.

[1] http://releases.openstack.org/newton/schedule.html

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [glare] Austin summit summary: Image sharing, community images, hierarchical image access, sharing in Glare

2016-05-02 Thread Nikhil Komawar
Hello everyone,

Just wanted to send a brief summary of the discussions at the summit.
This list is not exhaustive; however, it covers the relevant aspects
that various stakeholders need to be aware of.

  * Image sharing in its current state is inadequate for the
requirement of sharing an image with anyone who wants to use it.
This can be addressed using the community images concept
  * Current Glance Image sharing implementation needs some work to clean
up visibility semantics to be able to move forward with community images
  * At Friday's contributors' meetup, a proposal was made that Glance
could implement ACL logic that indicates "public", "private",
"shared" & next-gen "community", "inherited" visibility by keeping
each of these as a boolean variable, and thus independent from the
others (a toy sketch follows after this list). Of course, this needs
more research and is worth investigating. Timothy (from Symantec)
seemed interested in working on this
  * In a hierarchical keystone setup, currently the only way for a child
project to see the parent's image is if that project has been added
to the members associated with that image in Glance. This risks
getting things out of sync, and a requirement exists for a one-switch
setup where the inherited image can be shared or unshared for all
concerned members using one or two calls. Visibility semantics need
to be revisited for the same. Research needs to be done on what other
OpenStack services prefer when it comes to hierarchical access (to be
done by me)
  * Glare will defer the sharing implementation for now. This will be
revisited when the open questions in Glance have been resolved.
Hence, this is not a priority for Glare in Newton
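
As a toy sketch of the boolean-flags idea above (the flag names come
from the proposal; the resolution order is an illustrative guess, since
the exact semantics are the open research question):

def is_visible(image, project_id):
    # Each flag is independent; an image can e.g. be both shared with
    # specific members and visible to the whole community.
    if image['public'] or image['community']:
        return True
    if image['private'] and project_id == image['owner']:
        return True
    if image['shared'] and project_id in image['members']:
        return True
    return image['inherited'] and project_id in image['child_projects']

image = {'public': False, 'private': True, 'shared': True,
         'community': False, 'inherited': False, 'owner': 'proj-a',
         'members': {'proj-b'}, 'child_projects': set()}
assert is_visible(image, 'proj-b')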

For more information please reach out to me on #openstack-glance, email,
reply here etc.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Categorization (previously centralization) of config options

2016-05-02 Thread Nikhil Komawar
Hello everyone,

Just wanted to send a brief summary of the discussions at the summit.
This list is not exhaustive; however, it covers the relevant aspects
that various stakeholders need to be aware of.

  * The current proposal is to centralize the config options, improve
the help text, add dependencies (relative mappings) between multiple
config options & categorize the config options

  * The proposal is to first centralize the config options in order to
make it easier to improve the help text and increase the visibility
of the config options. It also makes it easier for code reviewers to
review the later proposed changes

  * The people who will want to clarify the help text (over time) are
deployers, admins, operators, and even install guide or operators'
guide technical writers. Centralizing the config options will make it
easier for them to make the right relative assessment of the configs

  * Having a good real-world example of a well-documented config option
will encourage future developers to write good documentation/help
text for the config options they intend to introduce (a sketch
follows after this list)

  * The current config options are sometimes local, sometimes defined
far from where they are actually used, so centralizing them will not
make this situation much worse. At the same time, centralization will
encourage the good practices indicated above. Thus the effort to
centralize config options seems worthwhile
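
As a rough sketch of what such a well-documented option could look like
with oslo.config (which Glance already uses) -- the option name, group
and help text below are illustrative only, not a real Glance option:

from oslo_config import cfg

opts = [
    cfg.IntOpt('scrub_batch_size',
               default=50,
               min=1,
               help="""
Maximum number of images scrubbed in a single pass.

Possible values:
    * Any positive integer

Related options:
    * delayed_delete
"""),
]

CONF = cfg.CONF
CONF.register_opts(opts, group='scrubber')
print(CONF.scrubber.scrub_batch_size)  # -> 50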

For more information please reach out to me on #openstack-glance, email,
reply here etc.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Qiu Yu
On Mon, May 2, 2016 at 12:38 PM, Steven Dake (stdake) 
wrote:

> Yup but that didn't happen with kolla-mesos and I didn't catch it until 2
> weeks after it was locked in stone.  At that point I asked for the ABI to
> be unified to which I got a "shrug" and no action.
>
> If it has been in one repo, everyone would have seen the multiple ABIs and
> rejected the patch in the first place.
>
> FWIW I am totally open to extending the ABI however is necessary to make
> Kolla containers be the reference that other projects use for their
> container deployment technology tooling.  In this case the ABI was
> extended without consultation and without repair after the problem was
> noticed.


ABI has been mentioned a lot in both this thread and the spec code
review. Does it refer to the container image only, or does it cover
other parts, like the jinja2 templates for config generation, as well?

That is the part I think needs more clarification. Even though we treat
Kubernetes as just another deployment tool, if it still relies on
Ansible to generate configurations (as proposed in the spec [1]), then
there's no clean way to centralize all the Kube-related stuff in a
separate repo.

If we're going to re-use Kolla's jinja2 templates and ini merging (which
heavily depend on Ansible modules as of now), I think it is practically
easier to bootstrap the Kubernetes stuff in the same Kolla repo. But
other than that, I'm in favor of a separate kolla-kubernetes repo.

[1] https://review.openstack.org/#/c/304182

QY
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Assaf Muller
On Mon, May 2, 2016 at 3:18 PM, Gal Sagie  wrote:
> Maybe it would help if, instead of trying to define criteria for which
> projects don't fit into the stadium, you tried to define in your spec
> what IT IS, and for what purpose it's there.

Well said. This came up multiple times in the Gerrit discussion(s) a
couple of months back. The design summit discussion highlighted again
that people don't know what the stadium is (The usage of the word
'supported' or 'official' really demonstrates this. Supported by
who?).

The stadium is not a lot more than perception at this point. What I'd
like to do is realign the terminology we use to describe networking
projects.

Can we consider a stadium project as an 'assisted' project, and a
project outside of the stadium as an 'independent' project? I'd like
to avoid using terms that reflect poorly on projects outside of the
stadium, and reverse the situation: Let's refer to projects outside
the stadium using either a neutral or a positive word, and be
consistent about using that word in any public facing document. I
would normally avoid debating semantics, but since these days the
stadium is more about perception than anything else, I think we should
focus on semantics and explaining what the stadium exactly is.

Another thing we should ask ourselves is whether the stadium should exist at all.
The idea that the Neutron team can vouch for a project is insane to
me. Neutron cores cannot vouch for ODL, only ODL developers can vouch
for ODL. For my money, Neutron cores currently do not vouch for
anything other than Neutron anyway, so this is just about reflecting
reality, not performing any real changes. The only sticking point I
see (And here I definitely agree with Armando) from a governance point
of view is the ability to have control over OpenStack's networking API
(It's conceivable to have the people with control over OpenStack's
networking API to be different than the current group of Neutron
cores). If we're OK going forward with no centralized place to manage
networking projects, we're also OK with having no control over the
API, and the danger here is to allow new projects (Think SFC and
SFC-new 6 months later) that solve similar use cases to define their
own API. That seems counter productive to OpenStack's longevity. One
way to resolve this is to go forward with Armando's suggestion to have
a centralized location to discuss and approve new APIs. I'm not sure
we must enforce the API declarations via some technical mechanism, it
might be possible to have everything on paper instead and assume
people are generally decent. Note that control over the API and the
stadium are essentially two independent problems, it may be convenient
to tackle both under the stadium discussion, but it's not necessary.

>
>
> On Mon, May 2, 2016 at 8:53 PM, Kyle Mestery  wrote:
>>
>> On Mon, May 2, 2016 at 12:22 PM, Armando M.  wrote:
>> >
>> >
>> > On 30 April 2016 at 14:24, Fawad Khaliq  wrote:
>> >>
>> >> Hi folks,
>> >>
>> >> Hope everyone had a great summit in Austin and got back safe! :)
>> >>
>> >> At the design summit, we had a Neutron stadium evolution session, which
>> >> needs your immediate attention as it will impact many stakeholders of
>> >> Neutron.
>> >
>> >
>> > It's my intention to follow up with a formal spec submission to
>> > neutron-specs as soon as I recover from the trip. Then you'll have a
>> > more
>> > transparent place to voice your concern.
>> >
>> >>
>> >>
>> >> To summarize for everyone, our Neutron leadership made the following
>> >> proposal for the “greater-good” of Neutron to improve and reduce burden
>> >> on
>> >> the Neutron PTL and core team to avoid managing more Neutron drivers:
>> >
>> >
>> > It's not just about burden. It's about consistency first and foremost.
>> >
>> >>
>> >>
>> >> Quoting the etherpad [1]
>> >>
>> >> "No request for inclusion are accepted for projects focussed solely on
>> >> implementations and/or API extensions to non-open solutions."
>> >
>> >
>> > By the way, this was brought forward and discussed way before the
>> > Summit. In
>> > fact this is already implemented at the Neutron governance level [1].
>> >
>> >>
>> >> To summarize for everyone what this means is that all Neutron drivers,
>> >> which implement non open source networking backends are instantly out
>> >> of the
>> >> Neutron stadium and are marked as "unofficial/unsupported/remotely
>> >> affiliated" and rest are capable of being tagged as
>> >> "supported/official”.
>> >
>> >
>> > Totally false.
>> >
>> > All this means is that these projects do not show up in list [1] (minus
>> > [2],
>> > which I forgot): ie. these projects are the projects the Neutron team
>> > vouches for. Supportability is not a property tracked by this list. You,
>> > amongst many, should know that it takes a lot more than being part of a
>> > list
>> > to be considered a supported solution, and I am actually even surprised

Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Doug Wiegley
Were we looking at the same etherpad?  I think the ‘inclusion criteria’ and 
‘benefits of the proposal’ sections cover those two points. Are you referring 
to something else?

Thanks,
doug


> On May 2, 2016, at 12:18 PM, Gal Sagie  wrote:
> 
> Maybe it would help if, instead of trying to define criteria for which
> projects don't fit into the stadium, you tried to define in your spec
> what IT IS, and for what purpose it's there.
> 
> 
> On Mon, May 2, 2016 at 8:53 PM, Kyle Mestery  > wrote:
> On Mon, May 2, 2016 at 12:22 PM, Armando M.  > wrote:
> >
> >
> > On 30 April 2016 at 14:24, Fawad Khaliq  > > wrote:
> >>
> >> Hi folks,
> >>
> >> Hope everyone had a great summit in Austin and got back safe! :)
> >>
> >> At the design summit, we had a Neutron stadium evolution session, which
> >> needs your immediate attention as it will impact many stakeholders of
> >> Neutron.
> >
> >
> > It's my intention to follow up with a formal spec submission to
> > neutron-specs as soon as I recover from the trip. Then you'll have a more
> > transparent place to voice your concern.
> >
> >>
> >>
> >> To summarize for everyone, our Neutron leadership made the following
> >> proposal for the “greater-good” of Neutron to improve and reduce burden on
> >> the Neutron PTL and core team to avoid managing more Neutron drivers:
> >
> >
> > It's not just about burden. It's about consistency first and foremost.
> >
> >>
> >>
> >> Quoting the etherpad [1]
> >>
> >> "No request for inclusion are accepted for projects focussed solely on
> >> implementations and/or API extensions to non-open solutions."
> >
> >
> > By the way, this was brought forward and discussed way before the Summit. In
> > fact this is already implemented at the Neutron governance level [1].
> >
> >>
> >> To summarize for everyone what this means is that all Neutron drivers,
> >> which implement non open source networking backends are instantly out of 
> >> the
> >> Neutron stadium and are marked as "unofficial/unsupported/remotely
> >> affiliated" and rest are capable of being tagged as "supported/official”.
> >
> >
> > Totally false.
> >
> > All this means is that these projects do not show up in list [1] (minus [2],
> > which I forgot): ie. these projects are the projects the Neutron team
> > vouches for. Supportability is not a property tracked by this list. You,
> > amongst many, should know that it takes a lot more than being part of a list
> > to be considered a supported solution, and I am actually even surprised that
> > you are misled/misleading by bringing 'support' into this conversation.
> >
> > [1] http://governance.openstack.org/reference/projects/neutron.html 
> > 
> > [2] https://review.openstack.org/#/c/309618/ 
> > 
> >
> >>
> >>
> >> This eliminates all commercial Neutron drivers developed for many service
> >> providers and enterprises who have deployed OpenStack successfully with
> >> these drivers. It’s unclear how the OpenStack Foundation will communicate
> >> its stance with all the users but clearly this is a huge set back for
> >> OpenStack and Neutron. Neutron will essentially become closed to all
> >> existing, non-open drivers, even if these drivers have been compliant with
> >> Neutron API for years and users have them deployed in production, forcing
> >> users to re-evaluate their options.
> >
> >
> > Again, totally false.
> >
> > The Neutron team will continue to stand behind the APIs and integration
> > mechanisms in a way that made the journey of breaking down the codebase as
> > we know it today possible. Any discussion of evolving these has been done
> > and will be done in the open and with the support of all parties involved,
> > non-open solutions included.
> >
> >>
> >>
> >> Furthermore, this proposal will erode confidence in Neutron and OpenStack,
> >> and destroy much of the value that the community has worked so hard to 
> >> build
> >> over the years.
> >>
> >>
> >> As a representative and member of the OpenStack community and maintainer
> >> of a Neutron driver (since Grizzly), I am deeply disappointed and disagree
> >> with this statement [2]. Tossing out all the non-open solutions is not in
> >> the best interest of the end user companies that have built working
> >> OpenStack clusters. This proposal will lead OpenStack end users who 
> >> deployed
> >> different drivers to think twice about OpenStack communities’ commitment to
> >> deliver solutions they need. Furthermore, this proposal punishes OpenStack
> >> companies who developed commercial backend drivers to help end users bring
> >> up OpenStack clouds.
> >
> >
> > What? Now you're just spreading FUD.
> >
> > What is being discussed in that etherpad is totally in line with [1], which
> > you 

Re: [openstack-dev] [kolla][kubernetes] One repo vs two

2016-05-02 Thread Adam Young

On 05/01/2016 05:03 PM, Steven Dake (stdake) wrote:
Ryan had rightly pointed out that when we made the original proposal at
the 9am morning session, we had asked folks if they wanted to
participate in a separate repository.


In Keystone, we are going to more and more repositories all the time.  
We started with everything in Keystone server, then split out the 
python-keystoneclient repo, then keystonemiddleware, and now 
keystoneauth.  Kerberos requires a separate auth repo, too, just due to 
package dependencies.  Multiple repos are not a bad thing.  The Policy 
store, and  the drivers behind identity are all candidates for future 
refactoring.


Splitting a repo is not a big deal, but it is easier to do up front than 
to retool.




I think starting with a separate, but supported repository makes things 
much easier.


Kolla is 2 things:

1.  Creation of Containers for deploying the base openstack services.
2.  Actual deployment of the same

I would argue that the architecture for this should be something like
five repos:


1.  Container production.  Assuming a single toolchain here.
2.  Kubernetes deploy
3.  Ansible deploy
4.  Mesos deploy
5.  kolla-deploy-common.

Python in general makes it hard to publish more than one upstream
library from a repo, so you really want to think about what it looks
like from PyPI first and organize based on that.
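
To make that concrete, here is the one-repo-one-distribution mapping
sketched as a minimal setup.py (OpenStack projects actually declare this
via pbr and setup.cfg, and every name below is hypothetical):

from setuptools import find_packages, setup

setup(
    name='kolla-kubernetes',
    version='0.1.0',
    packages=find_packages(include=['kolla_kubernetes*']),
    # The shared pieces live in their own repo and are pulled in as a
    # normal dependency:
    install_requires=['kolla-deploy-common'],
)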


If anything can be pulled out into its own repo, it should.

Yeah, it makes development a bit more of a pain, but there are ways to 
mitigate that.  git subprojects might be a painful one, but it is not 
the the only approach.


Over time, I would expect both the Ansible and Kubernetes repos 
themselves to be split into finer repos, with Ansible plugins and 
Kubernetes modules being separately managed.






I don't think a separate repository is the correct approach based upon 
one off private conversations with folks at summit.  Many people from 
that list approached me and indicated they would like to see the work 
integrated in one repository as outlined in my vote proposal email. 
 The reasons I heard were:


  * Better integration of the community
  * Better integration of the code base
  * Doesn't present an us vs them mentality that one could argue
happened during kolla-mesos
  * A second repository makes k8s a second class citizen deployment
architecture without a voice in the full deployment methodology
  * Two gating methods versus one
  * No going back to a unified repository while preserving git history

In favor of the separate repositories, I heard:

  * It presents a unified workspace for kubernetes alone
  * Packaging without ansible is simpler as the ansible directory need
not be deleted

There were other complaints but not many pros.  Unfortunately I failed 
to communicate these complaints to the core team prior to the vote, so 
now is the time for fixing that.


I'll leave it open to the new folks that want to do the work if they 
want to work on an offshoot repository and open us up to the possible 
problems above.


If you are on this list:

  * Ryan Hallisey
  * Britt Houser

  * mark casey

  * Steven Dake (delta-alpha-kilo-echo)

  * Michael Schmidt

  * Marian Schwarz

  * Andrew Battye

  * Kevin Fox (kfox)

  * Sidharth Surana (ssurana)

  *  Michal Rostecki (mrostecki)

  *   Swapnil Kulkarni (coolsvap)

  *   MD NADEEM (mail2nadeem92)

  *   Vikram Hosakote (vhosakot)

  *   Jeff Peeler (jpeeler)

  *   Martin Andre (mandre)

  *   Ian Main (Slower)

  * Hui Kang (huikang)

  * Serguei Bezverkhi (sbezverk)

  * Alex Polvi (polvi)

  * Rob Mason

  * Alicja Kwasniewska

  * sean mooney (sean-k-mooney)

  * Keith Byrne (kbyrne)

  * Zdenek Janda (xdeu)

  * Brandon Jozsa (v1k0d3n)

  * Rajath Agasthya (rajathagasthya)
  * Jinay Vora
  * Hui Kang
  * Davanum Srinivas



Please speak up if you are in favor of a separate repository or a 
unified repository.


The core reviewers will still take responsibility for determining if 
we proceed on the action of implementing kubernetes in general.


Thank you
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar] Zaqar meeting cancelled this week

2016-05-02 Thread Fei Long Wang
Hi team,

We cancelled the meeting today as most of the team is either OOO or on
vacation after the summit. We may also cancel the next meeting if no
topics are raised. Thanks.

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] amphora flavour and load balancer topology

2016-05-02 Thread Lingxian Kong
Hi, octavia guys,

Recently, we have been thinking about deploying Octavia into
production (pre-production first, in our case). Here are some
questions/concerns that need your suggestions and feedback.

1. Octavia will use a pre-configured default instance flavor to create
amphorae. Do you have any suggestions about how to choose an appropriate
amphora flavor? Or has anyone run load balancer performance tests with
different amphora flavors? This is important because we need to charge
users according to the flavor.

2. Currently, Octavia supports 2 load balancer topologies,
SINGLE/ACTIVE-STANDBY, which is also configured by the deployer. Is
there any possibility for users to decide which one to use, e.g. by
adding a param when creating a load balancer? I believe the 2 topologies
imply different SLAs; some users can afford the failure risk of SINGLE
mode because they want to be charged less.

Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Could you tell me the best way to develop openstack in Windows?

2016-05-02 Thread Anita Kuno
On 05/02/2016 04:20 AM, 박준하 wrote:
> Hi folks,
> 
> I’m a beginner at OpenStack development. I’ve been really interested
> in learning about OpenStack and want to modify and test it.
> 
> But I’m using PyCharm on Windows 7, and it’s very hard to program and
> test when I try to change the code a little bit.
> 
> I believe you are the best open source developers, especially on
> OpenStack. Could you tell me about your environment for developing on
> Windows?
> 
> Thanks.
> 
> Jun-ha, Park
> 
> Freshman of Cloud Technical Group KINX corp., Korea
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

The optimum environment for developing open source code consists of open
source tools.

The majority of the open source development community uses open source
tools for development. The community is best able to offer support for
use of open source tools.

If you need help accessing, and finding ways to learn how to use, open
source tools, please do ask.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

