[openstack-dev] [neutron] no CI or upgrades meetings this week

2017-05-07 Thread Ihar Hrachyshka
And no CI meetings in the next two weeks after the summit, because of no planned
attendance from the main participants.

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] How to mount ISO file to a openstack instance ?

2017-05-07 Thread Warad, Manjunath (Nokia - SG/Singapore)
Hi,

Yes, Glance can be used to manage ISO files, and an ISO image can be configured
to be shared by multiple instances as is.

But as far as I know, an ISO image cannot be used to boot an instance.
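For reference, a minimal sketch of the Glance/Cinder path with the `openstack` CLI (the image, volume, and server names, and the file path, are hypothetical placeholders; the guest device name depends on the hypervisor):

```shell
# Upload the ISO to Glance; Glance accepts "iso" as a disk format.
openstack image create --disk-format iso --container-format bare \
    --file ./my-data.iso my-iso

# Create a small volume from the image and attach it to a running instance.
openstack volume create --image my-iso --size 1 my-iso-vol
openstack server add volume my-instance my-iso-vol

# Inside the guest, the ISO then shows up as a block device, e.g.:
#   mount -o ro /dev/vdb /mnt
```

To share the same ISO content across several instances, each instance would get its own volume created from the one Glance image.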

Regards,
Manjunath

From: don...@ahope.com.cn [mailto:don...@ahope.com.cn]
Sent: Sunday, 7 May, 2017 10:12 PM
To: openstack 
Subject: [Openstack] How to mount ISO file to a openstack instance ?

Hi all,

I want to know how to mount an ISO file to an OpenStack instance. Can it be
managed by Glance? And can an ISO image be shared by multiple instances?


=
Dong Jianhua (董建华)
Address: New Century Office Building, 3766 Nanhuan Road, Binjiang District, Hangzhou
Postal code: 310053
Mobile: 13857132818
Switchboard: 0571-28996000
Fax: 0571-28996001
Hotline: 4006728686
Website: www.ahope.com.cn
Email: don...@ahope.com.cn
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] nova-api response time

2017-05-07 Thread Matt Riedemann

On 5/5/2017 9:00 AM, Vladimir Prokofev wrote:

Writing things down always helps.
So I dug a little further, and it seems that keystone is the weakest link.
It appears that it takes 500-600ms to generate a token, and a similar time to
validate it. This means that the first time an application accesses the API,
the request will take at least 1000ms, or even more. The second time will be
significantly faster, because you no longer need to request a token, and
validation is done via memcached.
But for some reason the cached token expires very fast, in a matter of
minutes (I didn't actually measure the exact time), so at some point
validation will once again take half a second.
If your app is written in a bad way, i.e. it creates a new auth session for
every request, this means it will run very, very slowly, as every request
will take half a second to get a token.

So now it's a question of determining why keystone is so slow. Btw, I
use keystone 10.0.0 with fernet tokens and Apache WSGI (5 processes, 1
thread) on three controller nodes behind haproxy.
I found this article [1] about keystone benchmarking, and it seems that
it can run significantly faster.

So once again:
 - what are your average times for token creation/validation?
 - is this procedure computationally expensive, requiring a faster
CPU (fernet implies encryption/decryption)?
 - I'd appreciate a link to a good doc on how to check and increase the
memcached caching timers.
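On the caching timers, the knobs usually involved are keystone's [cache] and [token] sections in keystone.conf (a sketch; the values shown are illustrative, not recommendations):

```ini
[cache]
enabled = true
backend = dogpile.cache.memcached
memcache_servers = 127.0.0.1:11211
# Default lifetime of cached items, in seconds.
expiration_time = 600

[token]
caching = true
# Token-specific cache lifetime; if unset, falls back to [cache]/expiration_time.
cache_time = 600
```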

By the way, I think this topic should now be named "openstack API
response times in general", rather than strictly tied to nova-api.

[1] 
https://docs.openstack.org/developer/rally/overview/stories/keystone/authenticate.html
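A quick way to collect the creation/validation numbers asked about above is to time the calls yourself. A minimal sketch of a timing harness (the `issue_token` callable below is a hypothetical stand-in for a real POST to keystone's /v3/auth/tokens, e.g. via python-requests):

```python
import time
from statistics import mean


def time_call(fn, *args, repeat=5, **kwargs):
    """Call fn `repeat` times; return (last_result, list of elapsed ms)."""
    timings = []
    result = None
    for _ in range(repeat):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        timings.append((time.perf_counter() - start) * 1000.0)
    return result, timings


if __name__ == "__main__":
    # Stand-in for a real keystone call; replace with an actual HTTP request.
    def issue_token():
        time.sleep(0.01)  # pretend network + crypto work
        return "gAAAA-fake-fernet-token"

    token, ms = time_call(issue_token, repeat=3)
    print("avg token issue time: %.1f ms" % mean(ms))
```

Running the same harness against token validation (GET /v3/auth/tokens with X-Subject-Token) would give directly comparable numbers.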

2017-05-05 13:38 GMT+03:00 Vladimir Prokofev:

Hello Ops.

So I had a feeling that my OpenStack API was running a bit slow; it
takes a couple of seconds to get a full list of all instances in
Horizon, for example, which is kind of a lot for a basic MySQL
select-and-present job. This got me wondering how I can measure
execution times and find out how long each part of an API request
takes to execute.

So I got the first REST client I could find (it happened to be a Google
Chrome application called ARC (Advanced REST Client)), decided how
and what I wanted to measure, and got to it.

I'm doing a basic /servers request to the public nova-api endpoint, using
an X-Auth-Token I got earlier from keystone. The project that I pull has
only a single instance in it.
From the client's perspective there are 4 major parts of a request:
connection time, send time, wait time, and receive time.
Connection time, send time, and receive time are all negligible, as
they take about 5-10ms combined. The major part is wait time, which
can take from 130ms to 2500ms.
Wait time is directly correlated to the execution time that I can
see in the nova-api log, with about 5-10ms of added lag, which is expected
and perfectly normal as I understand it.

The first run after some time is always the slowest one, and takes from
900ms to 2500ms. If I do the same request within the next few minutes it
will be faster - 130 to 200ms. My guess is that there's some caching
involved (memcached?).

So my questions are:
 - what execution times do you usually see in your environment for
nova-api, or other APIs for that matter? I have a feeling that even
130ms is very slow for such a simple task;
 - I assumed that the first request is so slow because it pulls data
from MySQL, which resides on slow storage (3 node Galera cluster, each
node is a KVM domain based on 2 SATA 7200 HDDs in software RAID1), so
I enabled slow query logging for MySQL with a 500ms threshold, but I see
nothing, even when wait time > 2000ms. It seems MySQL has nothing to do
with it;
 - one more idea is that it takes a lot of time for keystone to
validate a token. With an expired token I get a 401 response in 500-600ms
the first time, and in 30-40ms the next time I make a request;
 - another idea about slow storage is that nova-api has to write
something (a log, for example) before sending the response, but I don't
know how to test this exactly, other than just moving my
controller nodes to an SSD;
 - is there a way I can profile nova-api, and see exactly which
method takes the most time?

I understand that I didn't exactly describe my setup. This is due
to the fact that I'm more interested in learning how to profile API
components than in just plainly fixing my setup (e.g. making it
faster). In the end, as I see it, it's either a storage-related
issue (fixed by moving controllers and the database to fast
storage like SSD), or a computing power issue (fixed by placing
controllers on dedicated bare metal with a fast CPU/memory). I have
a desire not to guess, but to know exactly which it is :)
I will describe the setup in detail if such a request is made.
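On the last question (profiling which method takes the most time), one low-effort option is to wrap the WSGI application in a cProfile middleware using only the standard library. A sketch (the `dummy_app` below is a stand-in, not the real nova-api entry point):

```python
import cProfile
import io
import pstats


class ProfilerMiddleware:
    """WSGI middleware that prints the top functions by cumulative time
    for every request it handles."""

    def __init__(self, app, top=10):
        self.app = app
        self.top = top

    def __call__(self, environ, start_response):
        profiler = cProfile.Profile()
        # runcall() profiles a single invocation of the wrapped app.
        result = profiler.runcall(self.app, environ, start_response)
        out = io.StringIO()
        pstats.Stats(profiler, stream=out) \
            .sort_stats("cumulative") \
            .print_stats(self.top)
        print(out.getvalue())
        return result


if __name__ == "__main__":
    def dummy_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]

    app = ProfilerMiddleware(dummy_app)
    app({}, lambda status, headers: None)
```

In a real deployment the wrapping would happen wherever the WSGI app is built (e.g. in the paste pipeline), which is intrusive but avoids guessing between storage and CPU.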





[openstack-dev] [keystone] session etherpads

2017-05-07 Thread Lance Bragstad
Hey all,

We have a couple of sessions to start off the week, and I wanted to send out
the links to the etherpads [0] [1] [2].

Let me know if you have any questions. Otherwise, feel free to catch up on the
etherpads, or pre-populate them with content if you have any.

Thanks!



[0] https://etherpad.openstack.org/p/BOS-forum-consumable-keystone
[1]
https://etherpad.openstack.org/p/BOS-forum-next-steps-for-rbac-and-policy
[2] https://etherpad.openstack.org/p/BOS-forum-keystone-operator-feedback
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-midonet] networking-midonet core reviewer proposal

2017-05-07 Thread Takashi Yamamoto
As I haven't heard any objections, I went ahead and added them to the list. [1]

[1] https://review.openstack.org/#/admin/groups/607,members

On Wed, Apr 26, 2017 at 12:07 PM, Ryu Ishimoto wrote:
>
> +1 !
>
> On Wed, Apr 26, 2017 at 11:41 AM Takashi Yamamoto wrote:
>>
>> unless anyone objects, i'll add the following people to
>> networking-midonet project's core reviewers.
>>
>> - Antonio Ojea
>> - Brandon Berg
>> - Xavi León
>>
>> http://stackalytics.com/report/contribution/networking-midonet/30
>> http://stackalytics.com/report/contribution/networking-midonet/90
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn] metadata agent implementation

2017-05-07 Thread Michael Still
It would be interesting for this to be built in a way where other endpoints
could be added to the list and have extra headers added to their requests.

For example, we could end up with something quite similar to EC2 IAM if we
could add headers on the way through for requests to OpenStack endpoints.

Do you think the design you're proposing will be extensible like that?

Thanks,
Michael




On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez wrote:

> Hi folks,
>
> Now that it looks like the metadata proposal is more refined [0], I'd like
> to get some feedback from you on the driver implementation.
>
> The ovn-metadata-agent in networking-ovn will be responsible for
> creating the namespaces, spawning haproxies, and so on. But it also
> must implement most of the "old" neutron-metadata-agent functionality,
> which listens on a UNIX socket, receives requests from haproxy,
> adds some headers, and forwards them to Nova. This means that we can
> import/reuse a big part of the neutron code.
>
> I wonder what you guys think about depending on the neutron tree for the
> agent implementation, even though we can benefit from a lot of code reuse.
> On the other hand, if we want to get rid of this dependency, we could
> probably write the agent "from scratch" in C (what about having C
> code in the networking-ovn repo?) and, at the same time, it should
> buy us a performance boost (probably not very noticeable, since it'll
> respond to requests from local VMs involving a few lookups and
> processing simple HTTP requests; talking to nova would take most
> of the time, and this only happens at boot time).
>
> I would probably aim for a Python implementation reusing/importing
> code from the neutron tree, but I'm not sure how we want to deal with
> changes in the neutron codebase (we're actually importing code now).
> Looking forward to reading your thoughts :)
>
> Thanks,
> Daniel
>
> [0] https://review.openstack.org/#/c/452811/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
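The header-forwarding step the agent would reuse boils down to something like the sketch below (simplified; the header names follow the ones the neutron metadata proxy sets, and the signature assumes the usual shared-secret HMAC scheme; the function name is hypothetical):

```python
import hashlib
import hmac


def build_forward_headers(instance_id, project_id, shared_secret):
    """Headers the proxy adds before forwarding a request to nova's
    metadata API, so nova can identify and trust the caller."""
    return {
        "X-Instance-ID": instance_id,
        "X-Tenant-ID": project_id,
        # HMAC-SHA256 of the instance id, keyed with the secret shared
        # between the proxy and nova, so nova can verify the proxy.
        "X-Instance-ID-Signature": hmac.new(
            shared_secret.encode(), instance_id.encode(), hashlib.sha256
        ).hexdigest(),
    }


if __name__ == "__main__":
    print(build_forward_headers("uuid-1234", "proj-42", "secret"))
```

Extending this to inject different headers per endpoint (as Michael asks above) would mostly mean making this mapping pluggable rather than hard-coded.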


-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-07 Thread Flavio Percoco

On 07/05/17 11:59 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2017-05-07 09:49:41 -0400:

On 05/05/17 08:45 -0400, Sean Dague wrote:
>On 05/04/2017 01:10 PM, Flavio Percoco wrote:
>
>> Some of the current TC activities depend on the meeting to some extent:
>>
>> * We use the meeting to give the final ack on some of the formal-vote reviews.
>> * Some folks (tc members and not) use the meeting agenda to know what they
>>  should be reviewing.
>> * Some folks (tc members and not) use the meeting as a way to review or
>>  participate in active discussions.
>> * Some folks use the meeting logs to catch up on what's going on in the TC
>>
>> In the resolution that has been proposed[1], we've listed possible
>> solutions for
>> some of these issues and others:
>>
>> * Having office hours
>> * Sending weekly updates (pulse) on the current reviews and TC discussions
>>
>> Regardless of whether we do this change in one shot or in multiple steps
>> (or don't do it at
>> all), I believe it requires changing the way TC activities are done:
>>
>> * It requires folks (especially TC members) to be more active in reviewing
>>  governance patches
>> * It requires folks to engage more on the mailing list and start more
>>  discussions there.
>>
>> Sending this out to kick off a broader discussion on these topics.
>> Thoughts?
>> Opinions? Objections?
>
>To baseline: I am all in favor of an eventual world to get rid of the TC
>IRC meeting (and honestly IRC meetings in general), for all the reasons
>listed above.
>
>I shut down my IRC bouncer over a year ago specifically because I think
>that the assumption of being on IRC all the time is an anti pattern that
>we should be avoiding in our community.
>
>But, that being said, we have a working system right now, one where I
>honestly can't remember the last time we had an IRC meeting get to every
>topic we wanted to cover and not run into the time limit. That is data
>that these needs are not being addressed in other places (yet).
>
>So the concrete steps I would go with are:
>
>1) We need to stop requiring IRC meetings as part of meeting the Open
>definition.
>
>That has propagated this issue a lot -
>https://review.openstack.org/#/c/462077
>
>2) We really need to stop putting items like the project adds on the agenda.
>
>That's often forcing someone to be up in the middle of the night for 15
>minutes for no particularly good reason.

We've been doing this because it is a requirement in our process but yeah, we
can change this.

>3) Don't do interactive reviews in gerrit.
>
>Again, kind of a waste of time that is better done async. It's mostly
>triggered by the fact that gerrit doesn't make a good discussion medium
>for looking at broad strokes. It's really good for precision feedback,
>but for broad strokes, it's tough.
>
>One counter suggestion here is to have every governance patch that's not
>trivial require that an email be sent to the list tagged [tc][governance]
>for people to comment on more free-form there.

I've mentioned this a gazillion times and I believe it just keeps going
unheard. I think this should be the *default*, and I don't think requiring a
thread to be started is enough. I think we can be more proactive and start
threads ourselves when one is needed. The reason is that in "heated" patches
there can be different topics, and we might need multiple threads for some
patches. There's a lot that will have to be done to keep these emails on track.

>4) See what impact the summary that Chris is sending out has on making
>people feel like they understand what is going on in the meeting.
>Because I also think that we make assumptions that the log of the
>meeting describes what really happened. And I think that's often an
>incorrect assumption. The same words used by Monty, Thierry, and Jeremy mean
>different things, which you only know by knowing them all as people.
>Having a human interpretation of the meeting is good and puts together a
>more digestible narrative for people.

I disagree! I don't think we make those assumptions, which is why Anne and
I worked on those blog posts summarizing what had been going on in the TC.
Those posts stopped, but I think we should start working on them again. I've
pinged cdent and I think he's up for working with me on this. cdent yay/nay?

>
>Then evaluate because we will know that we need the meeting less (or
>less often) when we're regularly ending in 45 minutes, or 30 minutes,
>instead of slamming up against the wall with people feeling they had
>more to say.

TBH, I'm a bit frustrated. What you've written here looks a lot like what's in
the resolution and what I've been saying, except that the suggestion is not to
shut the meetings down right away but to evaluate what happens and then shut
them down, or not, which is fine.

My problem with this is that we *need* everyone on the TC to *actually* change
the way they work on their TC tasks. We need to be more proactive in reviews
that *are not* on the meeting agenda, and we need to engage more frequently


[openstack-dev] OpenStack Developer Mailing List Digest April 29 - May 5

2017-05-07 Thread Mike Perez
HTML version: 
https://www.openstack.org/blog/2017/05/openstack-developer-mailing-list-digest-20170507/

POST /api-wg/news
=
* Newly Published Guidelines
* Create a set of API interoperability guidelines [1]
* Guidelines Currently Under Review
* Microversions: add nextminversion field in version body [2]
* A suite of five documents about version discovery [3]
* Support for historical service type aliases [4]
* WIP: microversion architecture archival document [5]
* Full thread: [6]

Release countdown for week R-16 and R-15 May 8-9

* Focus:
* Pike feature development and completion of release goals.
* Team members attending the Forum at the Boston summit should be focused
  on requirements gathering and collecting feedback from other parts of the
  community.
* Actions:
* Some projects still need to do an Ocata stable point release:
* aodh
* barbican
* congress
* designate
* freezer
* glance
* keystone
* manila
* mistral
* sahara
* searchlight
* tricircle
* trove
* zaqar
* Projects following intermediary-release models that haven’t done any releases:
* aodh
* bifrost
* ceilometer
* cloudkitty[-dashboard]
* ironic-python-agent
* karbor[-dashboard]
* magnum[-ui]
* murano-agent
* panko
* senlin-dashboard
* solum[-dashboard]
* tacker[-dashboard]
* vitrage[-dashboard]
* Independent projects that have not published anything for 2017:
* solum
* bandit
* syntribos
* Upcoming deadlines and dates:
* Forum at OpenStack Summit in Boston: May 8-11
* Pike-2 milestone: June 8
* Full thread: [7]

OpenStack moving both too fast and too slow at the same time

* Drew Fisher makes the observation that the user survey [8] shows the same
  issue time and time again on pages 18-19.
* Things move too fast
* No LTS release
* Upgrades are scary for anything that isn’t N-1 -> N
* The OpenStack community has reasonable testing in place to ensure
  that N-1 -> N upgrades work.
* Page 18: "Most large customers move slowly and thus are running older
  versions, which are EOL upstream sometimes before they even deploy them."
* We’re unlikely to add more stable releases or work on them longer because:
* We need more people to do the work. It has been difficult to attract
  contributors to this area.
* Find a way to do that work that doesn’t hurt our ability to work on
  master.
* We need older versions of the deployment platforms available in our CI to run
  automated tests.
* Supported versions of development tools, such as setuptools and pip.
* Supported versions of the various libraries and system-level dependencies
  like libvirt.
* OpenStack started with no stable branches, where we were producing releases
  and ensuring that updates vaguely worked with N-1 -> N.
* Distributions maintained their own stable branches.
* It was suggested that, instead of duplicating effort, a stable branch
  be shared.
* The involvement of distribution packagers became more limited.
* Today it’s just one person, who is currently seeking employment.
* Maintaining stable branches has a cost.
* Complex to ensure that stable branches actually keep working.
* Availability of infrastructure resources.
* OpenStack became more stable, so the demand for longer-term maintenance
  became stronger.
* People expect upstream to provide it, not realizing that upstream is made
  of people employed by various organizations, and apparently this isn’t of
  interest to fund.
* The current stable branch model is of limited use, as it only supports stable
  branches for one year. Two potential outcomes:
* The OpenStack community still thinks there is a lot of value in doing
  this work upstream, in which organizations should invest resources in
  making that happen.
* The OpenStack community thinks this is better handled downstream, and we
  should get rid of them completely.
* For people attending the summit, there will be an on-boarding session for the
  stable team [9]
* Matt Riedemann did a video [10], etherpad [11], and slides [12] on the stable
  branch work. In the end, it was determined that the cost of doing it didn’t
  justify the dream, given the lack of resources to do it.
* Full thread: [13]


[1] - https://review.openstack.org/#/c/421846/
[2] - https://review.openstack.org/#/c/446138/
[3] - https://review.openstack.org/#/c/459405/
[4] - https://review.openstack.org/#/c/460654/3
[5] - https://review.openstack.org/444892
[6]  - http://lists.openstack.org/pipermail/openstack-dev/2017-May/116374.html
[7] - http://lists.openstack.org/pipermail/ope



Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-07 Thread Flavio Percoco

On 05/05/17 08:45 -0400, Sean Dague wrote:

On 05/04/2017 01:10 PM, Flavio Percoco wrote:


Some of the current TC activities depend on the meeting to some extent:

* We use the meeting to give the final ack on some the formal-vote reviews.
* Some folks (tc members and not) use the meeting agenda to know what they
 should be reviewing.
* Some folks (tc members and not) use the meeting as a way to review or
 paticipate in active discussions.
* Some folks use the meeting logs to catch up on what's going on in the TC

In the resolution that has been proposed[1], we've listed possible
solutions for
some of this issues and others:

* Having office hours
* Sending weekly updates (pulse) on the current reviews and TC discussions

Regardless we do this change on one-shot or multiple steps (or don't do
it at
all), I believe it requires changing the way TC activities are done:

* It requires folks (especially TC members) to be more active on reviewing
 governance patches
* It requires folks to engage more on the mailing list and start more
 discussions there.

Sending this out to kick off a broader discussion on these topics.
Thoughts?
Opinions? Objections?


To baseline: I am all in favor of an eventual world to get rid of the TC
IRC meeting (and honestly IRC meetings in general), for all the reasons
listed above.

I shut down my IRC bouncer over a year ago specifically because I think
that the assumption of being on IRC all the time is an anti-pattern that
we should be avoiding in our community.

But, that being said, we have a working system right now, one where I
honestly can't remember the last time we had an IRC meeting get to every
topic we wanted to cover and not run into the time limit. That is data
that these needs are not being addressed in other places (yet).

So the concrete steps I would go with is:

1) We need to stop requiring IRC meetings as part of meeting the Open
definition.

That has propagated this issue a lot -
https://review.openstack.org/#/c/462077

2) We really need to stop putting items like project additions on the
meeting agenda.

That's often forcing someone to be up in the middle of the night for 15
minutes for no particularly good reason.


We've been doing this because it is a requirement in our process but yeah, we
can change this.


3) Don't do interactive reviews in gerrit.

Again, kind of a waste of time that is better in async. It's mostly
triggered by the fact that gerrit doesn't make a good discussion medium
in looking at broad strokes. It's really good about precision feedback,
but broad strokes, it's tough.

One counter suggestion here is to have every governance patch that's not
trivial require that an email come to the list tagged [tc] [governance]
for people to comment more free form here.


I've mentioned this a gazillion times and I believe it just keeps going
unheard. I think this should be the *default* and I don't think requiring a
thread to be started is enough. I think we can be more proactive and start
threads ourselves when one is needed. The reason is that in "heated" patches
there can be different topics and we might need multiple threads for some
patches. There's a lot that will have to be done to keep these emails on track.


4) See what the impact of the summary that Chris is sending out does to
make people feel like they understand what is going on in the meeting.
Because I also think that we make assumptions that the log of the
meeting describes what really happened. And I think that's often an
incorrect assumption. The same words used by Monty, Thierry, Jeremy mean
different things. Which you only know by knowing them all as people.
Having human interpretation of the meeting is good and puts together a
more ingestible narrative for people.


I disagree! I don't think we make those assumptions, which is why Anne and
I worked on those blog posts summarizing what had been going on in the TC.
Those posts stopped, but I think we should start working on them again. I've
pinged cdent and I think he's up to work with me on this. cdent yay/nay ?



Then evaluate because we will know that we need the meeting less (or
less often) when we're regularly ending in 45 minutes, or 30 minutes,
instead of slamming up against the wall with people feeling they had
more to say.


TBH, I'm a bit frustrated. What you've written here looks a lot like what's in
the resolution and what I've been saying, except that the suggestion is to not
shut meetings down right away but to evaluate what happens and then shut them
down, or not, which is fine.

My problem with this is that we *need* everyone in the TC to *actually* change
the way they work on their TC tasks. We need to be more proactive in reviews
that *are not* in the meeting agenda, we need to engage more frequently in
discussions. Unfortunately, sometimes humans need hard changes to actually
modify the way they do stuff.

Anyway, let's start by removing the requirement on having meetings, the
requirement for rubber stamping reviews and have Thierry 

Re: [openstack-dev] [tc][all] Should the Technical Committee meetings be dropped?

2017-05-07 Thread Flavio Percoco

On 05/05/17 11:22 +0200, Thierry Carrez wrote:

Sean McGinnis wrote:

[...]
But part of my concern to getting rid of the meeting is that I do find it
valuable. The arguments against having it are some of the same I've heard for
our in-person events. It's hard for some to travel to the PTG. There's a lot
of active discussion at the PTG that is definitely a challenge for non-native
speakers to keep up with. But I think we all recognize what value having events
like the PTG provide. Or the Summit/Design Summit/Forum/Midcycle/
pick-your-favorite.


It's a great point. We definitely make faster progress on some reviews
by committing to that one-hour weekly busy segment. I think the
difference with the PTG (or midcycles) is that PTG is a lot more
productive setting than the meeting is, due to increased, face-to-face
bandwidth combined with a flexible schedule. It's also an exceptional
once-per-cycle event, rather than how we conduct business day-to-day.
It's useful to get together and we are very productive when we do, but
that doesn't mean we should all move and live all the time in the same
house to get things done.

I think we have come to rely too much on the weekly meeting. For a lot
of us, it provides a convenient, weekly hour to do TC business, and a
helpful reminder of what things should be reviewed before it. It allows us
to conveniently ignore TC business for the rest of the week.
Unfortunately, due to us living on a globe, it happens at an hour that
is a problem for some, and a no-go for others. So that convenience is
paid in the price of other's inconvenience or exclusion. Changing or
rotating the hour just creates more confusion, disruption and misery. So
I think we need to reduce our dependency on that meeting.

We don't have to stop doing meetings entirely. But I think that
day-to-day TC business should be conducted more on the ML and the
reviews, and that meetings should be exceptional. That can be achieved
by posting a weekly pulse email, and only organizing meetings when we
need the additional bandwidth (like if the review and ML threads are not
going anywhere). Then the meeting can be organized at the most
convenient time for the most critical stakeholders, rather than at the
weekly set time that provides the least overall inconvenience. If we need
a meeting to directly discuss a new project team proposed by people
based in Beijing, we should not have that meeting at 4am Beijing time,
and that should be the only meeting topic.


++

Yes, and I wouldn't even call these ad-hoc conversations meetings. Really, it's
more like "logged" conversations. Logging is enabled in every main OpenStack
channel.

The important part is changing the way we interact and work from a TC
perspective. The way it's done currently is *NOT* very friendly for folks who
are not in a US timezone or who are non-English speakers.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [releases] Stable branch conflicting information

2017-05-07 Thread Sean McGinnis
So I noticed today that the release information [0] for Newton appears to have
the wrong date for when Newton transitions to the Legacy Phase. According to
this conversation [1], I think (thought?) we established that rolling over to
each support phase would stay on a 6 month cycle, despite Ocata being a shorter
development cycle.

I am not talking about EOL here, just the transition periods for stable
branches to move to the next phase.

Based on this, the Next Phase for Newton appears to be wrong because it is on
a 6 month period from the Ocata release, not based on Newton's actual release
date.
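
As a sanity check, the phase boundaries under a strict six-month cadence can be computed from Newton's release date (2016-10-06). The phase layout below (Phase II at +6 months, Legacy at +12, EOL at +18) is my reading of the stable-branch policy under discussion, not an authoritative schedule:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Return d shifted by whole months, clamping the day to month length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

NEWTON_GA = date(2016, 10, 6)  # Newton release date
for n, label in enumerate(
        ("Phase II starts", "Phase III (Legacy) starts", "EOL"), start=1):
    print(label, add_months(NEWTON_GA, 6 * n))
# Under this cadence, Legacy would start 2017-10-06, anchored to Newton's
# own release date rather than six months after Ocata's shorter cycle.
```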

I was going to put up a patch to fix this, but then got myself really confused
because I couldn't actually reconcile the dates based on how the rest of the
phase information is listed there. Going off of what we state in our Stable
Branch phases [2], we are not following what we have published there.

Based on that information, Mitaka should still be in the Legacy phase, and
not actually EOL'd for another 6 months. (Well, technically that actual EOL
date isn't called out in the documentation, so I'm just assuming another 6
months)

So I'm not proposing we un-EOL Mitaka or change any of our policy. I'm just
pointing out that the information we have in [2] does not appear to be what
we are actually following according to [0], so we should change one or the
other to be consistent. Our EOL dates on the releases page are actually the
dates they should transition to Phase III according to our Stable Branch
"Support Phases" section as it is right now.

Sean

[0] https://releases.openstack.org/
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-February/111910.html
[2] 
https://docs.openstack.org/project-team-guide/stable-branches.html#support-phases

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Horizon | CSRF verification failed

2017-05-07 Thread khansa A. Mohamed
Dear all,


We have a new VMware Integrated OpenStack deployment. On most browsers,
when users access the HTTPS link to Horizon and log in, they get the
following error:

"Forbidden (403)
CSRF verification failed. Request aborted.
You are seeing this message because this site requires a CSRF cookie when
submitting forms. This cookie is required for security reasons, to ensure
that your browser is not being hijacked by third parties.
If you have configured your browser to disable cookies, please re-enable
them, at least for this site, or for 'same-origin' requests"
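
A common cause when Horizon is served over HTTPS is cookie/origin settings in Horizon's Django configuration. The fragment below is a sketch only: it assumes a stock Django-based Horizon with a `local_settings.py` (VMware Integrated OpenStack may manage this file itself), and the hostname is a placeholder:

```python
# local_settings.py fragment (assumed location; adjust for your deployment)
CSRF_COOKIE_SECURE = True     # only send the CSRF cookie over HTTPS
SESSION_COOKIE_SECURE = True  # same for the session cookie

# Hostname(s) users actually type into the browser
ALLOWED_HOSTS = ['horizon.example.com']

# On newer Django versions, CSRF checks also require scheme-qualified
# trusted origins when behind TLS termination or a proxy
CSRF_TRUSTED_ORIGINS = ['https://horizon.example.com']
```

If the cookies are fine in the browser, a mismatch between the external HTTPS hostname and what Horizon thinks its origin is (e.g. behind a load balancer) is the next thing to check.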

Can you please help clear this up?


Thanks in advance






___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators