Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Jay Bryant


On 4/1/2017 4:07 PM, Matt Riedemann wrote:

On 4/1/2017 12:17 PM, Jay Bryant wrote:

Matt,

I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, for which I
believe all the pieces were in place. The missing part (sticking point)
was doing a rescan of the SCSI bus on the node that had the extended
volume attached.

Has doing that been solved since Tokyo?

Jay



I wasn't in Tokyo so this is all news to me. I don't remember hearing 
about anything like this though.


OK, I am pretty sure I have notes on this somewhere; I just need to 
find them. I will work on that as a starting point.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] BOS Forum - User/Dev Feedback session for your project?

2017-04-01 Thread Matt Riedemann

On 3/30/2017 10:28 PM, Tom Fifield wrote:

Hi all,

Forum topic submission closes in 2 days (Sunday 23:59 UTC).

One of the types of topics you could consider submitting is a user/dev
feedback session for your project. I see Swift, Keystone and Kolla have
already done this - thanks!

From experience running the ops meetups, the best user/dev feedback
sessions are co-organised by a leading developer and a prominent user.
The dev can bring burning questions that the project wants answered; the
user is great at eliciting information from the room.

If you're interested in doing something like that in Boston,

http://forumtopics.openstack.org/

welcomes you ... for the next 68 hours.


Regards,


Tom



FWIW we've done these for Nova in the past; attendance from 
operators/users is pretty low and the feedback is generally positive, 
so we stopped doing them. It might just be due to Nova being boring, or 
more interesting sessions happening at the same time; I'm not sure. But 
that's why we don't schedule these types of sessions anymore. I'm sure 
we'll get more questions and feedback in the other sessions we've 
already proposed.


--

Thanks,

Matt



[openstack-dev] [cloudkitty] IRC Meeting

2017-04-01 Thread Christophe Sauthier

Dear OpenStackers,

With Pike development having been under way for some time now, it is a 
great time to have a meeting to coordinate our efforts!


That is why we will have an IRC meeting on Tuesday, April 11th, at 
12:30 UTC in #cloudkitty.


I hope to see many of you there.

All the best,

   Christophe Sauthier, PTL of Cloudkitty


Christophe Sauthier       Mail : christophe.sauth...@objectif-libre.com
CEO                       Mob : +33 (0) 6 16 98 63 96
Objectif Libre            URL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Follow OpenStack news in French by subscribing to the Pause OpenStack:

http://olib.re/pause-openstack



Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Matt Riedemann

On 4/1/2017 12:17 PM, Jay Bryant wrote:

Matt,

I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, for which I
believe all the pieces were in place. The missing part (sticking point)
was doing a rescan of the SCSI bus on the node that had the extended
volume attached.

Has doing that been solved since Tokyo?

Jay



I wasn't in Tokyo so this is all news to me. I don't remember hearing 
about anything like this though.


--

Thanks,

Matt



Re: [openstack-dev] [tripleo] Idempotence of the deployment process

2017-04-01 Thread Fox, Kevin M
At our site, we've seen bugs in idempotence break our system too.

In one case, it was an edge case where the master server went 
uncontactable at just the wrong time for a few seconds, causing the code 
to (wrongly) believe that keys didn't exist and needed to be recreated; 
then network connectivity was re-established and it went on doing its 
destructive deed.

Similar things have happened on more than one occasion.

So, I've become less enthralled with the idea that you should be doing 
everything all the time, even though it should be idempotent. The more 
code you run, the more likely there will be a bug in it somewhere. It's 
extremely hard to test for all occurrences of these sorts of bugs.

You should carefully weigh the risks and rewards of self healing for 
each part of the system. If an action is only ever done once, like 
bootstrapping credentials, and the effect of "self healing" would likely 
break the system anyway, it's probably better not to rerun it at all.
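
The failure mode described above amounts to conflating "could not check"
with "confirmed absent". A minimal sketch of the distinction (all names
are illustrative, not from any real deployment tool):

```python
# Sketch of the tri-state check Kevin's story suggests: a destructive
# "self-healing" step should only fire on a *confirmed* absence, never
# when the check itself failed (e.g. the master was briefly unreachable).
ABSENT, PRESENT, UNKNOWN = "absent", "present", "unknown"

def check_keys(fetch):
    try:
        return PRESENT if fetch() else ABSENT
    except ConnectionError:
        return UNKNOWN  # could not verify; do NOT treat as absent

def maybe_recreate_keys(fetch, recreate):
    state = check_keys(fetch)
    if state == ABSENT:
        recreate()  # safe: we positively confirmed the keys are gone
    # PRESENT or UNKNOWN: leave the system alone
    return state
```

The key design point is that the destructive branch runs only when the
check positively succeeded and reported absence.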

Thanks,
Kevin




From: Alex Schultz [aschu...@redhat.com]
Sent: Friday, March 31, 2017 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [tripleo] Idempotence of the deployment process

Hey folks,

I wanted to raise awareness of the concept of idempotence[0] and how
it affects deployments.  In the puppet world, we consider this very
important because Puppet is all about ensuring a desired state
(i.e. a system with config files + services). That said, I feel
it is important for any deployment tool to be aware of this.
When the same code is applied to a system repeatedly (as would be
the case in a puppet master deployment), the subsequent runs should
result in no changes if none are needed.  If you take a configured
system and rerun the same deployment code, you don't want your services
restarting when the end state is supposed to be the same. In the case
of TripleO, we should be able to deploy an overcloud, rerun the
deployment process, and see no configuration changes and zero services
restarted during the process. The second run should essentially be a
noop.

We have recently uncovered various bugs[1][2][3][4] that have
introduced service disruption due to a lack of idempotency causing
service restarts. So when reviewing or developing new code, it is
important to think about what happens if this bit of code runs twice.
There are a few common items that come up around idempotency. Things
like execs in puppet-tripleo should be refreshonly or use
unless/onlyif guards to prevent running again when unnecessary.
Additionally, in the TripleO configuration it is important to
understand in which step a service is configured and whether it could
get deconfigured in another step.  For example, we configure apache
and some wsgi services in step 3, but we currently configure some
additional OpenStack wsgi services in step 4, which is resulting in
excessive httpd restarts and possible service unavailability[5] when
updates are applied.

Another important place to understand this concept is in upgrades,
where we currently allow ansible tasks to be used. These should
result in an idempotent action when puppet subsequently runs, which
means the two bits of code essentially need to produce the same
configuration. For example, in the nova-api upgrade from Newton to
Ocata we needed to run the same commands[6] that would later be run by
puppet, to prevent clashing configurations and possible idempotency
problems.

Idempotency issues can cause service disruptions, longer deployment
times for end users, or even misconfigurations.  I think it might be
beneficial to add a periodic idempotency job that is basically a
double run of the deployment process, to ensure no service or
configuration changes happen on the second run. Thoughts?  Ideally one
in the gate would be awesome, but I think it would take too long to be
feasible with all the other jobs we currently run.

Thanks,
-Alex

[0] http://binford2k.com/content/2015/10/idempotence-not-just-big-scary-word
[1] https://bugs.launchpad.net/tripleo/+bug/1664650
[2] https://bugs.launchpad.net/puppet-nova/+bug/1665443
[3] https://bugs.launchpad.net/tripleo/+bug/1665405
[4] https://bugs.launchpad.net/tripleo/+bug/1665426
[5] https://review.openstack.org/#/c/434016/
[6] https://review.openstack.org/#/c/405241/



[openstack-dev] [nova] Removing BDM devices from POST requests

2017-04-01 Thread Matt Riedemann
I know we've talked about this over and over and another bug [1] 
reminded me of it. We have long talked about removing the ability to 
specify a block device name when creating a server or attaching a volume 
because we can't honor the requested device name anyway and trying to do 
so just causes issues. That's part of the reason why the libvirt driver 
stopped honoring the block device name in requests back in Liberty [2].


I think we all agree on removing the device name from the API, but I'm 
having a hard time remembering if someone signed up to write a spec for 
this. I could have sworn this came up recently and someone said they'd 
write a spec, but I can't remember.


So this is my attempt at remembering and if it's all a dream, then is 
anyone interested in owning this? If not, I'll start writing the spec 
this week.


[1] https://bugs.launchpad.net/nova/+bug/1648323
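
On the client side, the proposed removal would amount to never sending the
field at all. A hypothetical sketch of scrubbing it from a boot request
(the scrub helper is illustrative; the field names follow the compute API):

```python
# Hypothetical sketch: since Nova cannot honor a requested device name
# anyway, a client (or the API layer) could simply drop it from the
# block_device_mapping_v2 entries before the POST. The helper function
# is illustrative, not part of any real client library.
def scrub_device_names(server_req: dict) -> dict:
    bdms = server_req.get("block_device_mapping_v2", [])
    server_req["block_device_mapping_v2"] = [
        {k: v for k, v in bdm.items() if k != "device_name"}
        for bdm in bdms
    ]
    return server_req

req = {"block_device_mapping_v2": [
    {"uuid": "vol-1", "source_type": "volume", "device_name": "/dev/vdb"}]}
scrubbed = scrub_device_names(req)
assert "device_name" not in scrubbed["block_device_mapping_v2"][0]
```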

--

Thanks,

Matt



Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread Trinath Somanchi
Now the system is up. Just push a recheck; it should be fine.



Thanks,
Trinath Somanchi | HSDC, GSD, DN | NXP – Hyderabad – India.


From: ChangBo Guo [mailto:glongw...@gmail.com]
Sent: Saturday, April 01, 2017 7:17 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by 
POST_FAILURE results

Thanks for the reminder!

2017-04-01 2:28 GMT+08:00 Andreas Jaeger:
Since this morning, part of logs.openstack.org is corrupt due to a
downtime of one of the backing stores. The infra admins are currently
running fsck and will then take everything back into use.

Right now, we have put the logs on a spare disk so that everything that
runs still gets log results.

You might have received "POST_FAILURE" messages on jobs since jobs could
not push their data to logs.o.o.

Once the system is up and running again, feel free to "recheck" your
jobs where you are missing log files and see "POST_FAILURE" reports.

For now, please do not recheck, to avoid filling up our temporary disk
and to keep load low.

Just a reminder:

You can always check the status of the CI infrastructure via:
* https://wiki.openstack.org/wiki/Infrastructure_Status
* by following twitter http://twitter.com/openstackinfra
* Or checking the topic in IRC

And then report problems via #openstack-infra on IRC.

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126





--
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Jay Bryant

Matt,

I think discussion on this goes all the way back to Tokyo. There was
work on the Cinder side to send the notification to Nova, for which I
believe all the pieces were in place. The missing part (sticking point)
was doing a rescan of the SCSI bus on the node that had the extended
volume attached.
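
For reference, the host-side rescan being discussed is typically a write to
the device's sysfs rescan attribute. A minimal sketch follows; the device
name and the assumption of a single-path SCSI attachment are illustrative,
and multipath setups would need a rescan per path plus a multipathd resize:

```python
# Sketch only: nudge the kernel to re-read a SCSI device's size after an
# online extend. The device name ("sda") and single-path assumption are
# illustrative; this requires root and a real SCSI block device.
def rescan_path(dev: str) -> str:
    """Return the sysfs file that triggers a rescan for a block device."""
    return f"/sys/class/block/{dev}/device/rescan"

def rescan(dev: str) -> None:
    # Writing "1" asks the SCSI layer to re-probe the LUN, so tools like
    # lsblk then report the extended size.
    with open(rescan_path(dev), "w") as f:
        f.write("1")
```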


Has doing that been solved since Tokyo?

Jay

On 4/1/2017 10:34 AM, Matt Riedemann wrote:

On 3/31/2017 8:55 PM, TommyLike Hu wrote:

This feature was at one point proposed in both Cinder [1] and Nova [2],
but unfortunately no one (correct me if I am wrong) is going to handle
it during Pike. We do think extending an online volume is a beneficial
feature that most vendors support. We really don't want OpenStack to
miss this feature and would like to continue the work. Could anyone
share how much work is left and where I should start?

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/272524/
[2]
https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend 







The nova blueprint description does not contain much in the way of 
details, but from what is there it sounds a lot like the existing 
volume swap operation, which is triggered from Cinder by a volume 
migration or retype operation. How do those existing operations not 
already solve this use case?







Re: [openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread Matt Riedemann

On 3/31/2017 8:55 PM, TommyLike Hu wrote:

This feature was at one point proposed in both Cinder [1] and Nova [2],
but unfortunately no one (correct me if I am wrong) is going to handle
it during Pike. We do think extending an online volume is a beneficial
feature that most vendors support. We really don't want OpenStack to
miss this feature and would like to continue the work. Could anyone
share how much work is left and where I should start?

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/272524/
[2]
https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The nova blueprint description does not contain much in the way of 
details, but from what is there it sounds a lot like the existing 
volume swap operation, which is triggered from Cinder by a volume 
migration or retype operation. How do those existing operations not 
already solve this use case?


--

Thanks,

Matt



Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread Andreas Jaeger
On 2017-04-01 15:28, Davanum Srinivas wrote:
> Andreas,
> 
> looks like we are past the POST_FAILURE's. However the log links seem
> to redirect to a 404.
> 
> example in https://review.openstack.org/#/c/451964/, the log link is:
> http://logs.openstack.org/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/
> 
> which when clicked redirects to the following url which is a 404
> https://docs.openstack.org/infra/system-config/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/

When I sent my mail, Jeremy was still running rsync to copy over the
last files...

If anything is still missing, please recheck.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread Davanum Srinivas
Never mind :) It looks like it was a transient issue from last night;
logs show up correctly after a recheck.

Thanks,
Dims

On Sat, Apr 1, 2017 at 9:28 AM, Davanum Srinivas  wrote:
> Andreas,
>
> looks like we are past the POST_FAILURE's. However the log links seem
> to redirect to a 404.
>
> example in https://review.openstack.org/#/c/451964/, the log link is:
> http://logs.openstack.org/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/
>
> which when clicked redirects to the following url which is a 404
> https://docs.openstack.org/infra/system-config/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/
>
> Thanks,
> Dims
>
> On Fri, Mar 31, 2017 at 2:28 PM, Andreas Jaeger  wrote:
>> Since this morning, part of logs.openstack.org is corrupt due to a
>> downtime of one of the backing stores. The infra admins are currently
>> running fsck and will then take everything back into use.
>>
>> Right now, we have put the logs on a spare disk so that everything that
>> runs still gets log results.
>>
>> You might have received "POST_FAILURE" messages on jobs since jobs could
>> not push their data to logs.o.o.
>>
>> Once the system is up and running again, feel free to "recheck" your
>> jobs where you are missing log files and see "POST_FAILURE" reports.
>>
>> For now, please do not recheck, to avoid filling up our temporary disk
>> and to keep load low.
>>
>> Just a reminder:
>>
>> You can always check the status of the CI infrastructure via:
>> * https://wiki.openstack.org/wiki/Infrastructure_Status
>> * by following twitter http://twitter.com/openstackinfra
>> * Or checking the topic in IRC
>>
>> And then report problems via #openstack-infra on IRC.
>>
>> Andreas
>> --
>>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>>HRB 21284 (AG Nürnberg)
>> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>>
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread Davanum Srinivas
Andreas,

looks like we are past the POST_FAILURE's. However the log links seem
to redirect to a 404.

example in https://review.openstack.org/#/c/451964/, the log link is:
http://logs.openstack.org/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/

which when clicked redirects to the following url which is a 404
https://docs.openstack.org/infra/system-config/64/451964/7/check/gate-k8s-cloud-provider-golang-dsvm-conformance-ubuntu-xenial/569f22a/

Thanks,
Dims

On Fri, Mar 31, 2017 at 2:28 PM, Andreas Jaeger  wrote:
> Since this morning, part of logs.openstack.org is corrupt due to a
> downtime of one of the backing stores. The infra admins are currently
> running fsck and will then take everything back into use.
>
> Right now, we have put the logs on a spare disk so that everything that
> runs still gets log results.
>
> You might have received "POST_FAILURE" messages on jobs since jobs could
> not push their data to logs.o.o.
>
> Once the system is up and running again, feel free to "recheck" your
> jobs where you are missing log files and see "POST_FAILURE" reports.
>
> For now, please do not recheck, to avoid filling up our temporary disk
> and to keep load low.
>
> Just a reminder:
>
> You can always check the status of the CI infrastructure via:
> * https://wiki.openstack.org/wiki/Infrastructure_Status
> * by following twitter http://twitter.com/openstackinfra
> * Or checking the topic in IRC
>
> And then report problems via #openstack-infra on IRC.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread Andreas Jaeger
Here comes the all-green again ;)

On 2017-03-31 20:28, Andreas Jaeger wrote:
> Since this morning, part of logs.openstack.org is corrupt due to a
> downtime of one of the backing stores. The infra admins are currently
> running fsck and will then take everything back into use.
>
> Right now, we have put the logs on a spare disk so that everything that
> runs still gets log results.
>
> You might have received "POST_FAILURE" messages on jobs since jobs could
> not push their data to logs.o.o.
>
> Once the system is up and running again, feel free to "recheck" your
> jobs where you are missing log files and see "POST_FAILURE" reports.
>
> For now, please do not recheck, to avoid filling up our temporary disk
> and to keep load low.
> 
> Just a reminder:
> 
> You can always check the status of the CI infrastructure via:
> * https://wiki.openstack.org/wiki/Infrastructure_Status
> * by following twitter http://twitter.com/openstackinfra
> * Or checking the topic in IRC

Jeremy just announced on IRC: "The http://logs.openstack.org/ site is
back in operation; previous logs as well as any uploaded during the
outage should be available again; jobs which failed with POST_FAILURE
can also be safely rechecked."

Thanks to all involved for fixing this!

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [Cinder] [Nova] Extend attached volume

2017-04-01 Thread TommyLike Hu
This feature was at one point proposed in both Cinder [1] and Nova [2],
but unfortunately no one (correct me if I am wrong) is going to handle it
during Pike. We do think extending an online volume is a beneficial
feature that most vendors support. We really don't want OpenStack to miss
this feature and would like to continue the work. Could anyone share how
much work is left and where I should start?

Thanks
TommyLike.Hu

[1] https://review.openstack.org/#/c/272524/
[2]
https://blueprints.launchpad.net/nova/+spec/nova-support-attached-volume-extend


Re: [openstack-dev] [all][infra] logs.o.o corrupt - indicated by POST_FAILURE results

2017-04-01 Thread ChangBo Guo
Thanks for the reminder!

2017-04-01 2:28 GMT+08:00 Andreas Jaeger :

> Since this morning, part of logs.openstack.org is corrupt due to a
> downtime of one of the backing stores. The infra admins are currently
> running fsck and will then take everything back into use.
>
> Right now, we have put the logs on a spare disk so that everything that
> runs still gets log results.
>
> You might have received "POST_FAILURE" messages on jobs since jobs could
> not push their data to logs.o.o.
>
> Once the system is up and running again, feel free to "recheck" your
> jobs where you are missing log files and see "POST_FAILURE" reports.
>
> For now, please do not recheck, to avoid filling up our temporary disk
> and to keep load low.
>
> Just a reminder:
>
> You can always check the status of the CI infrastructure via:
> * https://wiki.openstack.org/wiki/Infrastructure_Status
> * by following twitter http://twitter.com/openstackinfra
> * Or checking the topic in IRC
>
> And then report problems via #openstack-infra on IRC.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
>



-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [Openstack] [infra] lists.openstack.org maintenance Friday, March 31 20:00-23:00 UTC

2017-04-01 Thread Clark Boylan
On Tue, Mar 28, 2017, at 05:14 PM, Jeremy Stanley wrote:
> The Mailman listserv on lists.openstack.org will be offline for an
> upgrade-related maintenance for up to 3 hours (but hopefully much
> less) starting at 20:00 UTC March 31, this coming Friday. This
> activity is scheduled for a relatively low-volume period across our
> lists; during this time, most messages bound for the server will
> queue at the senders' MTAs until the server is back in service and
> so should not result in any obvious disruption.
> 
> Apologies for cross-posting so widely, but we wanted to make sure
> copies of this announcement went to most of our higher-traffic
> lists.

This work has been completed successfully, and receipt of this email
should mostly prove it :). Thank you to everyone who helped out.

Once again, apologies for the cross-post, but we wanted to make sure
everyone who got the notice got this follow-up too.

Thank you,
Clark



[openstack-dev] [tripleo] Idempotence of the deployment process

2017-04-01 Thread Alex Schultz
Hey folks,

I wanted to raise awareness of the concept of idempotence[0] and how
it affects deployments.  In the puppet world, we consider this very
important because Puppet is all about ensuring a desired state
(i.e. a system with config files + services). That said, I feel
it is important for any deployment tool to be aware of this.
When the same code is applied to a system repeatedly (as would be
the case in a puppet master deployment), the subsequent runs should
result in no changes if none are needed.  If you take a configured
system and rerun the same deployment code, you don't want your services
restarting when the end state is supposed to be the same. In the case
of TripleO, we should be able to deploy an overcloud, rerun the
deployment process, and see no configuration changes and zero services
restarted during the process. The second run should essentially be a
noop.
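
The noop property described above can be stated as a tiny sketch:
applying the same desired state twice should report a change only on the
first run. The config path and helper below are illustrative, not from
any real tool:

```python
# Minimal illustration of idempotent configuration management: bring an
# in-memory "filesystem" to a desired state and count the changes made.
# A second run over an already-converged system must report zero changes.
def apply_config(path_contents: dict, desired: dict) -> int:
    """Converge path_contents to desired; return the number of changes."""
    changes = 0
    for path, content in desired.items():
        if path_contents.get(path) != content:
            path_contents[path] = content  # only touch what differs
            changes += 1  # a real tool would notify/restart services here
    return changes

state = {}
desired = {"/etc/nova/nova.conf": "[DEFAULT]\ndebug=False\n"}
assert apply_config(state, desired) == 1  # first run converges
assert apply_config(state, desired) == 0  # second run is a noop
```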

We have recently uncovered various bugs[1][2][3][4] that have
introduced service disruption due to a lack of idempotency causing
service restarts. So when reviewing or developing new code, it is
important to think about what happens if this bit of code runs twice.
There are a few common items that come up around idempotency. Things
like execs in puppet-tripleo should be refreshonly or use
unless/onlyif guards to prevent running again when unnecessary.
Additionally, in the TripleO configuration it is important to
understand in which step a service is configured and whether it could
get deconfigured in another step.  For example, we configure apache
and some wsgi services in step 3, but we currently configure some
additional OpenStack wsgi services in step 4, which is resulting in
excessive httpd restarts and possible service unavailability[5] when
updates are applied.
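
The unless/onlyif guard pattern mentioned above can be approximated
outside Puppet as well; a rough Python sketch (the helper and the sync
example are illustrative):

```python
# Rough analog of a guarded exec (Puppet's unless/onlyif): run the
# command only when a cheap check says the work is still needed, so a
# second deployment pass does nothing.
def guarded_exec(action, unless) -> bool:
    """Run action() unless unless() already holds; return True if it ran."""
    if unless():
        return False  # already converged, skip the exec
    action()
    return True

done = {"synced": False}
def sync_db():
    done["synced"] = True  # stand-in for e.g. a db_sync command

assert guarded_exec(sync_db, lambda: done["synced"]) is True   # first run
assert guarded_exec(sync_db, lambda: done["synced"]) is False  # rerun skips
```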

Another important place to understand this concept is in upgrades,
where we currently allow ansible tasks to be used. These should
result in an idempotent action when puppet subsequently runs, which
means the two bits of code essentially need to produce the same
configuration. For example, in the nova-api upgrade from Newton to
Ocata we needed to run the same commands[6] that would later be run by
puppet, to prevent clashing configurations and possible idempotency
problems.

Idempotency issues can cause service disruptions, longer deployment
times for end users, or even misconfigurations.  I think it might be
beneficial to add a periodic idempotency job that is basically a
double run of the deployment process, to ensure no service or
configuration changes happen on the second run. Thoughts?  Ideally one
in the gate would be awesome, but I think it would take too long to be
feasible with all the other jobs we currently run.

Thanks,
-Alex

[0] http://binford2k.com/content/2015/10/idempotence-not-just-big-scary-word
[1] https://bugs.launchpad.net/tripleo/+bug/1664650
[2] https://bugs.launchpad.net/puppet-nova/+bug/1665443
[3] https://bugs.launchpad.net/tripleo/+bug/1665405
[4] https://bugs.launchpad.net/tripleo/+bug/1665426
[5] https://review.openstack.org/#/c/434016/
[6] https://review.openstack.org/#/c/405241/



Re: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

2017-04-01 Thread Amrith Kumar
Great idea; happy to try it out for Trove. We love o.m.rpc :) But it needs
to be secure; my other comment has been posted in the review. I'm doing a
talk about oslo.messaging use by Trove in Boston anyway; maybe we can get
Melissa to join me for that?

-amrith


-Original Message-
From: Deja, Dawid [mailto:dawid.d...@intel.com] 
Sent: Friday, March 31, 2017 10:41 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo]Introduction of new driver for oslo.messaging

Hi all,

To work around issues with RabbitMQ scalability, we'd like to introduce a 
new driver in oslo.messaging that has nearly no scaling limits [1].
We'd like to have as many eyes on this as possible, since we believe this 
is the technology of the future. Thanks for all reviews.

Dawid Deja

[1] https://review.openstack.org/#/c/452219/

