Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-26 Thread Joshua Hesketh
On Sat, Jun 25, 2016 at 10:44 AM, Robert Collins 
wrote:

> Removing the pbr branch should be fine - it was an exceptional thing
> to have that branch in the first place - pbr is consumed by releases
> only, and due to its place in the dependency graph is very very very
> hard to pin or cap.
>

Based on this, I have removed the stable/kilo branch from pbr.

Cheers,
Josh



>
> -Rob
>
> On 25 June 2016 at 12:37, Tony Breeds  wrote:
> > On Fri, Jun 24, 2016 at 04:36:03PM -0700, Sumit Naiksatam wrote:
> >> Hi Tony, Thanks for your response, and no worries! We can live with
> >> the kilo-eol tag, no need to try to delete it. And as you suggested,
> >> we can add a second eol tag when we actually EoL this branch.
> >>
> >> As regards reviving the deleted branches, would a bug have to be
> >> created somewhere to track this, or is this already on the radar of
> >> the infra team (thanks in advance if it already is)?
> >
> > No bug needed.  I'll work with the infra team to re-create the branch.
> > Just to level set: it won't be this weekend.
> >
> > Yours Tony.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [constraints] Updating stable branch URL

2016-06-26 Thread Tony Breeds
On Mon, Jun 27, 2016 at 03:08:18PM +1000, Sachi King wrote:
> To facilitate upper-constraints on developer systems we have a
> hard-coded URL in projects' tox.ini.  This URL needs to change after
> the openstack/requirements repo has created a branch for the
> stable release.
> 
> This is in reference to [0].  There was some mention of possibly
> adding this to the stable-branch creation procedure, but as requirements
> tends to release later, and it would require a whole bunch of special
> notes, this seems sub-optimal.

I'm not certain how this is different from updating .gitreview and the default
branch.  Can't we extend the tools [1] that do that step to also update
tox.ini?

Yours Tony.
[1] 
http://git.openstack.org/cgit/openstack-infra/release-tools/tree/make_stable_branch.sh
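
To make the suggestion concrete, here is a sketch of the kind of step such a
tool could grow. The URL and the ?h=stable/<series> query parameter are
assumptions based on 2016-era tox.ini files, not the actual contents of
make_stable_branch.sh:

```shell
# Sketch: point the hard-coded upper-constraints URL at a stable branch,
# the way a branch-creation tool might. Runs against a throwaway tox.ini.
set -e
series=mitaka
workdir=$(mktemp -d)
cat > "$workdir/tox.ini" <<'EOF'
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
EOF
# append ?h=stable/<series> so the file is fetched from the stable branch
sed -i -e "s|/upper-constraints.txt|/upper-constraints.txt?h=stable/${series}|" "$workdir/tox.ini"
grep "upper-constraints" "$workdir/tox.ini"
```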




[openstack-dev] [constraints] Updating stable branch URL

2016-06-26 Thread Sachi King
To facilitate upper-constraints on developer systems we have a
hard-coded URL in projects' tox.ini.  This URL needs to change after
the openstack/requirements repo has created a branch for the
stable release.

This is in reference to [0].  There was some mention of possibly
adding this to the stable-branch creation procedure, but as requirements
tends to release later, and it would require a whole bunch of special
notes, this seems sub-optimal.

Any thoughts on the best way to support this without a bunch of manual work?

[0]: https://review.openstack.org/#/c/267941/

Cheers,
Sachi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-26 Thread Joshua Hesketh
On Sat, Jun 25, 2016 at 4:20 AM, Sumit Naiksatam 
wrote:

> Hi, I had earlier requested in this thread that the stable/kilo branch
> for the following repos not be deleted:
>
> > openstack/group-based-policy
> > openstack/group-based-policy-automation
> > openstack/group-based-policy-ui
> > openstack/python-group-based-policy-client
>


Hello,

Very sorry that these were removed. I should have checked this thread
more closely.

I have recreated stable/kilo branches for each of those projects.

As Tony mentioned, due to the nature of tags, removing them is slightly
more complicated. We can remove them from the git farm upstream, but those
who have already fetched the tags will need to remove them locally by
hand. And if you later push an identically named kilo-eol tag, only those
who had removed the incorrect version locally will fetch down the new
copy. In other words, it will be very easy for what you have tagged as
kilo-eol upstream to become different from what people have in their
local copies.

As such, if you ever retire the branch, it's probably important to name
its EOL tag differently. If you call it something else, there's no harm in
removing the current kilo-eol upstream to keep things tidy, if you so wish
(let me know if you need help with that).
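
The behaviour described above is easy to demonstrate. A self-contained sketch
using throwaway repos in a temp dir (nothing here touches any real OpenStack
repo; the kilo-eol tag name is just for illustration):

```shell
# Demonstration: a re-pushed tag of the same name is NOT picked up by
# clones that already fetched the original one.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/upstream.git"
git clone -q "$tmp/upstream.git" "$tmp/work" 2>/dev/null
git -C "$tmp/work" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "first"
git -C "$tmp/work" tag kilo-eol
git -C "$tmp/work" push -q origin HEAD kilo-eol
git clone -q "$tmp/upstream.git" "$tmp/downstream"   # fetches the tag
# upstream now deletes and re-creates the tag on a new commit
git -C "$tmp/work" tag -d kilo-eol >/dev/null
git -C "$tmp/work" -c user.email=a@example.com -c user.name=a \
    commit -q --allow-empty -m "second"
git -C "$tmp/work" tag kilo-eol
git -C "$tmp/work" push -q -f origin kilo-eol
# a plain fetch in the old clone does not clobber the existing tag
git -C "$tmp/downstream" fetch -q origin 2>/dev/null || true
old=$(git -C "$tmp/downstream" rev-parse kilo-eol)
new=$(git -C "$tmp/work" rev-parse kilo-eol)
echo "downstream still has: $old"
echo "upstream now has:     $new"
```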

Cheers,
Josh



>
> and the request was ack’ed by Tony (also in this thread). However, I
> just noticed that these branches have been deleted. Can this deletion
> please be reversed?
>
> Thanks,
> ~Sumit.
>
> On Fri, Jun 24, 2016 at 10:32 AM, Andreas Jaeger  wrote:
> > On 06/24/2016 02:09 PM, Joshua Hesketh wrote:
> >>
> >> Hi all,
> >>
> >> I have completed removing stable/kilo branches from the projects listed
> >> [0]*. There are now 'kilo-eol' tags in place at the sha's where the
> >> branches were.
> >>
> >> *There are a couple of exceptions. oslo-incubator was listed but is an
> >> unmaintained project so no further action was required. Tony and I have
> >> also decided to hold off
> >> on openstack-dev/devstack, openstack-dev/grenade, openstack-dev/pbr
> >> and openstack/requirements until we are confident removing the
> >> stable/kilo branch will have no negative effects on the projects that
> >> opted out of being EOL'd.
> >>
> >> In this process we noted a couple of repositories still have branches
> >> from Juno and even earlier. I haven't put together a comprehensive list
> >> of old branches, but rather if your project has an outdated branch that
> >> you would like removed and/or tagged as end-of-life, please let me know.
> >>
> >> For those interested in the script I used or other infra cores looking
> >> to perform this next time, it is up for review in the release-tools
> >> repo: https://review.openstack.org/333875
> >
> >
> > Thanks, Joshua.
> >
> > We're now removing the special handling of kilo branches from
> project-config
> > as well - it looks odd for some projects where kilo is removed from
> > conditions and we still have icehouse or juno. Followup changes are
> welcome.
> >
> > https://review.openstack.org/334008
> > https://review.openstack.org/333910
> > https://review.openstack.org/333977
> >
> > Andreas
> > --
> >  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
> >   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> >GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> >HRB 21284 (AG Nürnberg)
> > GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Overlay L2 networking across OpenStack slides used in OPNFV summit

2016-06-26 Thread joehuang
Hello,

At last week's OPNFV Summit in Berlin, a presentation was given on "Overlay L2
networking across OpenStack"; the slides are here for your reference:


https://docs.google.com/presentation/d/1Cv23dLAmSB57IpD-nt-TH5lrCehcoeiml7HpvgUWauo/edit#slide=id.g1478638225_0_0

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-26 Thread Angus Lees
On Mon, 27 Jun 2016 at 12:59 Tony Breeds  wrote:

> On Mon, Jun 27, 2016 at 02:02:35AM +, Angus Lees wrote:
>
> > ***
> > What are we trying to impose on ourselves for upgrades for the present
> and
> > near future (ie: while rootwrap is still a thing)?
> > ***
> >
> > A. Sean says above that we do "offline" upgrades, by which I _think_ he
> > means a host-by-host (or even global?) "turn everything (on the same
> > host/container) off, upgrade all files on disk for that host/container,
> > turn it all back on again".  If this is the model, then we can trivially
> > update rootwrap files during the "upgrade" step, and I don't see any
> reason
> > why we need to discuss anything further - except how we implement this in
> > grenade.
>
> Yes this one.  You must upgrade everything in the host/container to the
> same
> release at the same time.  For example we do NOT support running
> nova@liberty
> and cinder@mitaka.
>

Ack.  OK .. so what's the additional difficulty around config files?  It
sounds like we can replace all the config files with something completely
different during the update phase, if we want to.  Indeed, it sounds like
there isn't even a need to manage a deprecation period for config files,
since there will never be mismatched code+config (good: it means fewer
poorly tested legacy combinations in code).

Specifically, it seems grenade in both doc and code currently describes
something quite a bit stricter than this.  I think what we want is more
like "use devstack to deploy old; run/test; **use devstack to deploy new**
but pointing at existing DB/state_path from old; run/test, interact with
things created with old, etc".

A solution to our immediate rootwrap issue would be to always copy over the
rootwrap configs from 'new' during upgrade, and this shouldn't even be
controversial. I can't read body language over email, so: is everyone OK
with this? Why wasn't this the first thing everyone jumped to suggest?
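
For what it's worth, the mechanics of "copy over the rootwrap configs from
'new'" are trivial. A sketch with illustrative stand-in paths (not grenade's
actual layout):

```shell
# Sketch: during the upgrade step, unconditionally replace the installed
# rootwrap filters with the ones shipped by the 'new' release.
# All paths here are stand-ins for illustration.
set -e
new_release=$(mktemp -d)   # stands in for the freshly checked out 'new' tree
etc=$(mktemp -d)           # stands in for /etc
mkdir -p "$new_release/etc/nova/rootwrap.d" "$etc/nova"
printf '[Filters]\n' > "$new_release/etc/nova/rootwrap.d/compute.filters"
cp -r "$new_release/etc/nova/rootwrap.d" "$etc/nova/"
ls "$etc/nova/rootwrap.d"
```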

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-26 Thread Tony Breeds
On Mon, Jun 27, 2016 at 02:02:35AM +, Angus Lees wrote:

> ***
> What are we trying to impose on ourselves for upgrades for the present and
> near future (ie: while rootwrap is still a thing)?
> ***
> 
> A. Sean says above that we do "offline" upgrades, by which I _think_ he
> means a host-by-host (or even global?) "turn everything (on the same
> host/container) off, upgrade all files on disk for that host/container,
> turn it all back on again".  If this is the model, then we can trivially
> update rootwrap files during the "upgrade" step, and I don't see any reason
> why we need to discuss anything further - except how we implement this in
> grenade.

Yes this one.  You must upgrade everything in the host/container to the same
release at the same time.  For example we do NOT support running nova@liberty
and cinder@mitaka.

> B. We need to support a mix of old + new code running on the same
> host/container, running against the same config files (presumably because
> we're updating service-by-service, or want to minimise the
> service-unavailability during upgrades to literally just a process
> restart).  So we need to think about how and when we stage config vs code
> updates, and make sure that any overlap is appropriately allowed for
> (expand-contract, etc).

During the Austin summit we clearly said this wasn't a thing we did.  The
discussion was mostly centered around code but if we're running code that
needs filter $x then it's reasonable to have to install that filter at the
same time.

I can't find those points in the etherpads for the sessions where I thought
this was discussed.

Yours Tony.




Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-26 Thread Angus Lees
On Fri, 24 Jun 2016 at 19:13 Thierry Carrez  wrote:

> In summary, I think the choice is between (1)+(4) and doing (4)
> directly. How doable is (4) in the timeframe we have ? Do we all agree
> that (4) is the endgame ?
>

I don't make predictions about development timelines within OpenStack.

Yes, I think (4) should be our endgame, and is certainly the most
future-proof approach.  I haven't thought through in detail nor spoken with
the ops community at all about how disruptive such a transition might be.

Thinking just now, I suspect any such transition needs a minimum of one
(probably 2) release cycles.  Theoretically:

- introduce top-of-main code that drops to unpriv uid if root (could
theoretically be done in current N cycle)
- change required deployment method during upgrade to release N+1
- In N+1 cycle require starting as root (or similar)
- In N+1 change top-of-main code to fork-privsep-then-drop instead of
drop-then-later-use-sudo
- So by the time N+2 is deployed we're done.
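
The first step above ("drops to unpriv uid if root") is a small amount of
code. A minimal sketch, with the service username as a purely illustrative
assumption:

```python
# Sketch of step 1: at top of main, drop to an unprivileged uid when
# started as root. The "nova" username is an illustrative assumption.
import os
import pwd

def drop_privileges(username="nova"):
    """If running as root, drop to `username`; otherwise do nothing.
    In the full scheme the privsep helper is forked *before* this point,
    so root capability survives only inside that helper."""
    if os.getuid() != 0:
        return  # already unprivileged
    pw = pwd.getpwnam(username)
    os.setgroups([])       # shed supplementary groups while we still can
    os.setgid(pw.pw_gid)   # gid first: after setuid we could not change it
    os.setuid(pw.pw_uid)
```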

If we want to give more notice and manage a longer deprecation cycle before
requiring a start-as-root deployment (and I think we want both these
things), then expand as appropriate.

I suspect the change to deployment method will be easier to communicate and
less unpleasant if we do "every" openstack service in the one
tear-the-bandaid-off cycle rather than drag this out over years.  This of
course requires substantial cross-project consensus, coordination, and
timing.   Once the laughing subsides and we assume that isn't going to
happen, then we will need something machine-readable and in-tree that
correctly communicates the desired per-service approach at that point in
time - perhaps we could publish (eg) systemd unit files for our services
and adjust those to User=root/not-root as services switch to the new method
over time.  This requires communicating when in the upgrade cycle systemd
unit files (or the implied equivalents) should be updated, so goto 10 ;)

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] upgrades vs rootwrap

2016-06-26 Thread Angus Lees
On Fri, 24 Jun 2016 at 20:48 Sean Dague  wrote:

> On 06/24/2016 05:12 AM, Thierry Carrez wrote:
> > I'm adding Possibility (0): change Grenade so that rootwrap filters from
> > N+1 are put in place before you upgrade.
>
> If you do that as general course what you are saying is that every
> installer and install process includes overwriting all of rootwrap
> before every upgrade. Keep in mind we do upstream upgrade as offline,
> which means that we've fully shut down the cloud. This would remove the
> testing requirement that rootwrap configs were even compatible between N
> and N+1. And if you think this is theoretical, you should see the patches
> I've gotten over the years to grenade because people didn't see an issue
> with that at all. :)
>
> I do get that people don't like the constraints we've self imposed, but
> we've done that for very good reasons. The #1 complaint from operators,
> for ever, has been the pain and danger of upgrading. That's why we are
> still trademarking new Juno clouds. When you upgrade Apache, you don't
> have to change your config files.
>

In case it got lost, I'm 100% on board with making upgrades safe and
straightforward, and I understand that grenade is merely a tool to help us
test ourselves against our process and not an enemy to be worked around.
I'm an ops guy proud and true and hate you all for making openstack hard to
upgrade in the first place :P

Rootwrap configs need to be updated in line with new rootwrap-using code -
that's just the way the rootwrap security mechanism works, since the
security "trust" flows from the root-installed rootwrap config files.
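
For readers less familiar with the mechanism: the "trust" mentioned above is
anchored in root-owned filter files that whitelist exact command lines, along
these lines (an illustrative fragment, not any project's actual filter set):

```ini
# /etc/nova/rootwrap.d/compute.filters (illustrative)
[Filters]
# each entry whitelists one command the unprivileged service may run as
# root through rootwrap; new code that needs a new command as root also
# needs a new entry here, which is why config and code must move together
chown: CommandFilter, chown, root
```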

I would like to clarify what our self-imposed upgrade rules are so that I
can design code within those constraints, and no-one is answering my
question so I'm just getting more confused as this thread progresses...

***
What are we trying to impose on ourselves for upgrades for the present and
near future (ie: while rootwrap is still a thing)?
***

A. Sean says above that we do "offline" upgrades, by which I _think_ he
means a host-by-host (or even global?) "turn everything (on the same
host/container) off, upgrade all files on disk for that host/container,
turn it all back on again".  If this is the model, then we can trivially
update rootwrap files during the "upgrade" step, and I don't see any reason
why we need to discuss anything further - except how we implement this in
grenade.

B. We need to support a mix of old + new code running on the same
host/container, running against the same config files (presumably because
we're updating service-by-service, or want to minimise the
service-unavailability during upgrades to literally just a process
restart).  So we need to think about how and when we stage config vs code
updates, and make sure that any overlap is appropriately allowed for
(expand-contract, etc).

C. We would like to just never upgrade rootwrap (or other config) files
ever again (implying a freeze in as_root command lines, effective ~a year
ago).  Any config update is an exception dealt with through case-by-case
process and release notes.


I feel like the grenade check currently implements (B) with a 6 month lead
time on config changes, but the "theory of upgrade" doc and our verbal
policy might actually be (C) (see this thread, eg), and Sean above
introduced the phrase "offline" which threw me completely into thinking
maybe we're aiming for (A).  You can see why I'm looking for clarification
 ;)

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] CI down - No connected Gearman servers

2016-06-26 Thread Emilien Macchi
Dan restarted Gearman, but CI is still failing on something else now:

qemu-img convert -f raw -O qcow2 /opt/stack/new/overcloud-full.raw
/opt/stack/new/overcloud-full.qcow2
qemu-img: error while writing sector 8217856: No space left on device

I still don't have access to anything but I hope it can be fixed by Monday.

Thanks,

On Sat, Jun 25, 2016 at 3:29 PM, Emilien Macchi  wrote:
> Hi all,
>
> CI is currently entirely red:
> https://bugs.launchpad.net/tripleo/+bug/1594732
> http://logs.openstack.org/11/333511/5/check-tripleo/gate-tripleo-ci-centos-7-nonha/16f72e8/console.html#_2016-06-25_15_43_35_798040
>
> gear.Client.unknown - ERROR - Connection <... host: 192.168.1.1 port: 4730>
> timed out waiting for a response to a submit job request: <... unique: None>
> gear.NoConnectedServersError: No connected Gearman servers
>
> If someone who has secret access to tripleo CI could restart Gearman
> that would be awesome,
>
> Thanks, and enjoy the weekend.
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-26 Thread Robert Collins
On 27 June 2016 at 13:20, Jens Rosenboom  wrote:
> 2016-06-22 9:18 GMT+02:00 Victor Stinner :
>> Hi,
>>
>> Current status: only 3 projects are not ported yet to Python 3:
>>
>> * nova (76% done)
>> * trove (42%)
>> * swift (0%)
>>
>>https://wiki.openstack.org/wiki/Python3
>
> How should differences between python3.4 and python3.5 be handled?

Like most minor version differences - write code that handles them.
The test in https://bugs.launchpad.net/neutron/+bug/1559191 is overly
literal, since it depends on the string of the error from Python,
which is not a stable interface.
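
A small illustration of the distinction (the exception type is the stable
contract; the message text is what changed between 3.4 and 3.5):

```python
# Assert on the exception type, not on str(exc): the type is part of the
# documented contract, the message wording is not.
import ipaddress

def is_valid_ip(text):
    try:
        ipaddress.ip_address(text)
        return True
    except ValueError:  # stable across 3.4/3.5; its message text is not
        return False
```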

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Status of the OpenStack port to Python 3

2016-06-26 Thread Jens Rosenboom
2016-06-22 9:18 GMT+02:00 Victor Stinner :
> Hi,
>
> Current status: only 3 projects are not ported yet to Python 3:
>
> * nova (76% done)
> * trove (42%)
> * swift (0%)
>
>https://wiki.openstack.org/wiki/Python3

How should differences between python3.4 and python3.5 be handled?

Ubuntu Xenial contains only python3.5 and no packages for 3.4 anymore,
so when planning to run services with python3, one would have to
support 3.5, but there are unsolved issues like
https://bugs.launchpad.net/neutron/+bug/1559191

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistal] Mistral logo ideas?

2016-06-26 Thread hie...@vn.fujitsu.com
Hi folks,

Maybe something simple like this: http://prntscr.com/blhcyq

From: Ilya Kutukov [ikutu...@mirantis.com]
Sent: Friday, 24 June 2016 13:25
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistal] Mistral logo ideas?
Maybe something like this


On Fri, Jun 24, 2016 at 2:18 PM, Ilya Kutukov  wrote:
Here is top-down projection 
https://www.the-blueprints.com/blueprints-depot/ships/ships-france/nmf-mistral-l9013.png

On Fri, Jun 24, 2016 at 2:17 PM, Ilya Kutukov  wrote:
Look, the Mistral landing markup (white stripes and circles with numbers)
looks like a task queue:
https://patriceayme.files.wordpress.com/2014/05/mistral.jpg

On Fri, Jun 24, 2016 at 12:55 PM, Hardik  
wrote:
+1 :) 

On Friday 24 June 2016 03:08 PM, Nikolay Makhotkin wrote:
I like the idea of the logo being a stylized wind turbine. Perhaps it could be
a turbine with a gust of wind. Then we can show that Mistral harnesses the 
power of the wind :-)

I like this idea! Combine Mistral functionality symbol with wind :)

-- 
Best Regards,
Nikolay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat][tripleo] Tripleo holding on to old, bad data

2016-06-26 Thread Steve Baker
Assuming the stack is deleted and nova is showing no servers, you likely
have ironic nodes that are not in a schedulable state.


Run ironic node-list; you want Power State: Off, Provisioning State:
available, Maintenance: False.



On 25/06/16 09:27, Adam Young wrote:
A coworker and I have both had trouble recovering from failed 
overcloud deploys.  I've wiped out whatever data I can, but, even with 
nothing in the Heat Database, doing an


openstack overcloud deploy

seems to be looking for a specific Nova server by UUID:


heat resource-show 93afc25e-1ab2-4773-9949-6906e2f7c115 0

| resource_status_reason | ResourceInError:
resources[0].resources.Controller: Went to status ERROR due
to "Message: No valid host was found. There are not enough hosts
available., Code: 500" |

| resource_type  | OS::TripleO::Controller


Inside the Nova log I see:


2016-06-24 21:05:06.973 15551 DEBUG nova.api.openstack.wsgi
[req-c8a5179c-2adf-45a6-b186-7d7b29cd8f39 bcdfefb36f3ca9a8f3cfa445ab40
ec662f250a85453cb40054f3aff49b58 - - -] Returning 404 to user: Instance
8f90c961-4609-4c9b-9d62-360a40f88eed could not be found. __call__
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1070


How can I get the undercloud back to a clean state?


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Merge IRC channels

2016-06-26 Thread Kirill Zaitsev
Just wanting to share an opinion:

We in murano had a similar discussion about a year ago, and ultimately decided
that it's not worth the work to rename #murano to #openstack-murano, support
the deprecated channel, edit documents, and move people around. After all,
there are #heat, #tacker and #tripleo. See for yourself:
https://wiki.openstack.org/wiki/IRC

BTW, it looks like none of the #fuel-X channels are on the list. Since you're
making changes anyway, it might be a good idea to update the wiki =)

-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc

On 25 June 2016 at 12:40:17, Roman Prykhodchenko (m...@romcheg.me) wrote:

Since Fuel is a part of OpenStack now, should we rename #fuel to 
#openstack-fuel?

- romcheg
24 черв. 2016 р. о 18:49 Andrew Woodward  написав(ла):

There is also #fuel-devops

I never liked having all the channels, so +1

On Fri, Jun 24, 2016 at 4:25 AM Anastasia Urlapova  
wrote:
Vova,
please don't forget to merge #fuel-qa into #fuel

On Fri, Jun 24, 2016 at 1:55 PM, Vladimir Kozhukalov  
wrote:
Nice. #fuel-infra is to be merged as well.

Vladimir Kozhukalov

On Fri, Jun 24, 2016 at 1:50 PM, Aleksandra Fedorova  
wrote:
And +1 for #fuel-infra

As of now, it will be more useful if infra issues related to the project are
visible to project developers. We don't have much infra-related traffic on IRC
for now, and we will be able to split again if we get too much.

On Fri, Jun 24, 2016 at 1:26 PM, Vladimir Kozhukalov  
wrote:
Dear colleagues,

We have a few IRC channels but the level of activity there is quite low.

#fuel
#fuel-dev
#fuel-python
#fuel-infra

My suggestion is to merge all these channels into a single IRC channel #fuel.
Other #fuel-* channels are to be deprecated.

What do you think of this?


Vladimir Kozhukalov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Aleksandra Fedorova
Fuel CI Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  




[openstack-dev] [all][Python 3.4-3.5] Async python clients

2016-06-26 Thread Denis Makogon
Hello stackers.


I know that some work is in progress to bring Python 3.4 compatibility to
the backend services, and it is a kind of hard question to answer, but I'd
like to know if there are any plans to support asynchronous HTTP API
clients in the near future using aiohttp [1] (PEP 3156)?

If yes, could someone describe the current state?

This question comes up because I've been working on AIOrchestra [2] (an
async TOSCA orchestration framework) and its OpenStack plugin [3]. By its
design, I need asynchronous HTTP API clients in order to get the full
power of uvloop, an AsyncIO event loop, on Python 3.5 for fast,
lightweight and reliable orchestration. But the current clients are still
synchronous, even where they are Py3.4-or-greater compatible. A major
problem appears when you provision a resource that takes some time to
reach an ACTIVE/COMPLETED state (a nova instance, a stack, a trove
database, etc.): you have to poll for status changes, and polling means
sending HTTP requests within a time frame defined by the number of
retries and the delay between them (almost all PaaS solutions in
OpenStack do this; that may be fine for distributed backend services, but
not for async frameworks).
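
As an illustration of the polling pattern described above, a minimal asyncio
sketch (no aiohttp or OpenStack client involved; fetch_status is a stand-in
for an async GET against a resource's status field):

```python
# Poll an async status source until it reaches the target state or the
# retries * delay time frame is exhausted.
import asyncio

async def wait_for_status(fetch_status, target="ACTIVE",
                          retries=10, delay=0.01):
    for _ in range(retries):
        status = await fetch_status()
        if status == target:
            return status
        await asyncio.sleep(delay)
    raise TimeoutError("resource never reached %s" % target)

async def fake_status(_state={"polls": 0}):
    """Toy stand-in: reports BUILD twice, then ACTIVE."""
    _state["polls"] += 1
    return "ACTIVE" if _state["polls"] >= 3 else "BUILD"
```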


[1] https://github.com/KeepSafe/aiohttp
[2] https://github.com/aiorchestra/aiorchestra
[3] https://github.com/aiorchestra/aiorchestra-openstack-plugin


Kind regards,
Denys Makogon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev