Hello, there
My dear lovely folks out there, I am trying to set up Mitaka on three nodes
(control, compute and network) and referring to this documentation,
http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-controller-install.html
I don't see any mention of what changes are to be
Hi KaiQiang,
Thank you for your reply.
As for 1), you are correct that Magnum does support 2 flavors (one for the
master node and the other for minion nodes). What I want to address is
whether we should support 2 or N Nova flavors ONLY for minion nodes.
As for 2), We have made Magnum
On Tue, Apr 19, 2016 at 01:14:34PM +0200, Lajos Katona wrote:
> Hi,
>
> In our internal CI system we realized that stable/kilo devstack fails with
> the following stack trace:
> It seems that the root cause is that testresources has a new version 2.0.0
> from 18 April.
>
> I tried to find
On Wed, Apr 20, 2016, at 08:44 PM, Tony Breeds wrote:
> On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:
>
> > I also argued at the time that we should aim for entirely automated
> > check-and-update. This has stalled on not figuring out how to run e.g.
> > Neutron unit tests
Indeed it will be a terrific contribution.
Edgar
On Apr 20, 2016, at 4:10 AM, Dina Belova wrote:
Folks,
I think Ann's report is super cool and 100% worth publishing on OpenStack
Yeah, so I was able to get the list of users as the cloud admin using
the "openstack user list --project proj --domain domain" command.
However, I can't seem to do this as the domain admin (admin on both domain
and project):
--
$ openstack user list --project proj --domain domain
You are not
On Thu, Apr 21, 2016 at 02:09:24PM +1200, Robert Collins wrote:
> I also argued at the time that we should aim for entirely automated
> check-and-update. This has stalled on not figuring out how to run e.g.
> Neutron unit tests against requirements changes - our coverage is just
> too low at the
On Wed, Apr 20, 2016 at 12:14 PM Neil Jerram wrote:
> A couple of questions about our Austin-related planning tools...
>
> - Can one's calendar at
>
> https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25
> be exported as .ics, or otherwise
Just wanted to let everyone know:
I am officially cancelling the meeting for Apr 21st (no agenda).
Also, we won't have weekly meeting during the summit week ie on Apr 28th.
Next meeting will be on May 5th. See you all online then.
On 4/20/16 1:27 PM, Nikhil Komawar wrote:
> Hi all,
>
> Last
From a quick glance over the proposal, it seems that networking-sfc also
does the same. In addition, networking-sfc is successfully integrated with
ONOS [1] and planned for ODL [2], OVN [3] & Tacker [4] (without any issues
with the existing APIs so far). In addition, if we feel the existing
On 04/20/2016 09:10 PM, Dmitry Sutyagin wrote:
Another correction - the issue is observed in Kilo, not Liberty, sorry
for messing this up. (though this part of the code is identical in L)
On Wed, Apr 20, 2016 at 5:50 PM, Dmitry Sutyagin wrote:
On 18 April 2016 at 03:13, Doug Hellmann wrote:
> I am organizing a summit session for the cross-project track to
> (re)consider how we manage our list of global dependencies [1].
> Some of the changes I propose would have a big impact, and so I
> want to ensure everyone
We are following:
http://docs.openstack.org/liberty/install-guide-rdo/index.html
Using CentOS 7. We've chosen option 2 for networking, where we have both
a private and a public network. We've pretty much been following the guide step
by step, up to and finishing the "Add the dashboard" step. All the
On 20 April 2016 at 05:44, Clint Byrum wrote:
> Excerpts from Michał Jastrzębski's message of 2016-04-18 10:29:20 -0700:
>> What I meant is if you have liberty Nova and liberty Cinder, and you
>> want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka and
>> Cinder which
Hello everyone,
Please be reminded that the weekly OpenStack QA team IRC meeting will be
Thursday, April 21st at 9:00 UTC in the #openstack-meeting channel.
The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_April_21st_2016_.280900_UTC.29
On 20 April 2016 at 04:47, Clark Boylan wrote:
> On Tue, Apr 19, 2016, at 08:14 AM, Doug Hellmann wrote:
>> Excerpts from Jeremy Stanley's message of 2016-04-19 15:00:24 +0000:
>> > On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
>> > [...]
>> > > We have the
On 20 April 2016 at 03:00, Jeremy Stanley wrote:
> On 2016-04-19 09:10:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
>> We have the global list and the upper constraints list, and both
>> are intended to be used to signal to packaging folks what we think
>> ought to be used.
Another correction - the issue is observed in Kilo, not Liberty, sorry for
messing this up. (though this part of the code is identical in L)
On Wed, Apr 20, 2016 at 5:50 PM, Dmitry Sutyagin wrote:
> Correction:
>
> group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
>
Correction:
group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
ra.user_dn.upper() = 'CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ'
So this could work only if:
- string in group_dns was str, not unicode
- text was uppercase
Now the question is - should it be so?
On Wed, Apr 20, 2016 at 5:41 PM,
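A minimal sketch of the kind of normalization that would sidestep both problems (illustrative only, not keystone's actual code):

def dn_in_groups(user_dn, group_dns):
    # Uppercase both sides so str-vs-unicode and case differences
    # don't break the membership check (ASCII-equal str and unicode
    # compare and hash equal on Python 2).
    normalized = set(dn.upper() for dn in group_dns)
    return user_dn.upper() in normalized

group_dns = [u'CN=GroupX,OU=Groups,OU=SomeOU,DC=zzz']
print(dn_in_groups('CN=GROUPX,OU=GROUPS,OU=SOMEOU,DC=ZZZ', group_dns))  # True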
Hi everybody,
I am observing the following issue:
LDAP backend is enabled for identity and assignment, domain specific
configs disabled.
LDAP section configured - users, groups, projects and roles are mapped.
I am able to use the identity v3 API to list users and groups, and to verify that a
user is in a
Yes, Tricircle follows the HTTP guideline from the OpenStack API working group:
https://specs.openstack.org/openstack/api-wg/guidelines/http.html
If something does not follow the guideline, please report a bug to track the
issue.
Best Regards
Chaoyi Huang ( Joe Huang )
From: Shinobu Kinjo
We are tickled pink to announce the release of:
oslo.i18n 3.6.0: Oslo i18n library
This release is part of the newton release series.
With source available at:
http://git.openstack.org/cgit/openstack/oslo.i18n
With package available at:
https://pypi.python.org/pypi/oslo.i18n
Please
We are tickled pink to announce the release of:
oslo.log 3.5.0: oslo.log library
This release is part of the newton release series.
With source available at:
http://git.openstack.org/cgit/openstack/oslo.log
With package available at:
https://pypi.python.org/pypi/oslo.log
Please
We are psyched to announce the release of:
oslo.db 1.7.5: oslo.db library
This release is part of the kilo stable release series.
With source available at:
http://git.openstack.org/cgit/openstack/oslo.db
Please report issues through launchpad:
http://bugs.launchpad.net/oslo.db
For
Hi Guys,
So I am able to find out what role a user has for a specific project, but
have not been able to find a way to list all members in a given project.
Is this doable? Is there a way I can get all members of an existing project
from the CLI? I don't think horizon exposes this information either.
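One avenue that may work is listing role assignments instead of users. On the CLI that would be "openstack role assignment list --project proj", and here is a sketch with python-keystoneclient v3 (sess and PROJECT_ID are placeholders; sess is an already-authenticated keystoneauth1 session):

from keystoneclient.v3 import client

keystone = client.Client(session=sess)
# Each assignment on the project carries the granted role plus the
# user (or group) it was granted to.
for ra in keystone.role_assignments.list(project='PROJECT_ID'):
    print(getattr(ra, 'user', None), ra.role)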
On Wed, Apr 13, 2016 at 6:07 AM, David Stanek wrote:
> On Wed, Apr 13, 2016 at 3:26 AM koshiya maho wrote:
>
>>
>> My request to all keystone cores to give their suggestions about the same.
>>
>>
> I'll test this a little and see if I can see
BTW, I should mention that my openstack instance has been integrated with
Active Directory, so horizon does not show any project member information.
Thanks.
On Wed, Apr 20, 2016 at 4:20 PM, Jagga Soorma wrote:
> Hi Guys,
>
> So I am able to find out what role a user has for a
This morning we finally cleaned up the last warnings in api-ref, so now
we can enforce errors on warnings. Woot! That went much faster than I
anticipated, and puts us at a really good place for summit.
The next phase is the content verification phase. This patch is merging
a set of comments at
It's new enough that people haven't thought to ask until recently. The recent
interest in the topic is starting because Magnum is getting mature enough that folks are
starting to deploy it and finding out it doesn't solve a bunch of issues they
had thought it would. It's pretty natural. Don't just blow it
I think Magnum is much closer to Sahara or Trove in its workings. Heat is
orchestration; that's what the COE does.
Sahara has plugins to deploy various Hadoopy-like clusters, get them
assembled into something useful, and has a few abstraction APIs like "submit a
job to the deployed
> -----Original Message-----
> From: Keith Bray [mailto:keith.b...@rackspace.com]
> Sent: April-20-16 6:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
> Magnum doesn't
On Wed, Apr 20, 2016 at 04:13:38PM +, Neil Jerram wrote:
> A couple of questions about our Austin-related planning tools...
>
> - Can one's calendar at
> https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25
> be exported as .ics, or otherwise integrated into a wider
If Magnum is focused on installation and management for COEs, it will be
unclear how much it differs from Heat and other generic
orchestration. It looks like most of the current Magnum functionality is
provided by Heat. Magnum's focus on deployment will potentially lead to
another
Hey Guys,
So it turns out that another internal website is creating a cookie with the
domain .xxx.com and our horizon instance creates a cookie with the domain
yyy.xxx.com. So, after a user logs into horizon and then maybe visits that
other internal site and comes back to log in to horizon, they get the
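If the collision is on the session cookie, one possible fix is to pin horizon's cookies in local_settings.py; these are standard Django settings, and the values below are guesses for your setup:

# Keep horizon's cookies scoped to its own hostname, under a distinct
# name, so a parent-domain (.xxx.com) cookie can't shadow them.
SESSION_COOKIE_DOMAIN = 'yyy.xxx.com'
SESSION_COOKIE_NAME = 'horizon_sessionid'
CSRF_COOKIE_DOMAIN = 'yyy.xxx.com'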
+1 to plugins. It has suited nova/trove/sahara/etc. well.
Thanks,
Kevin
From: Keith Bray [keith.b...@rackspace.com]
Sent: Wednesday, April 20, 2016 3:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev]
On Wed, Apr 20, 2016 at 3:10 PM, Boden Russell wrote:
> Today there are a number of places in nova, neutron and perhaps
> elsewhere that employ backoff + timeout strategies (see [1] - [4]).
> While we are working towards a unified approach in neutron for RPC [5],
> it appears
On 4/20/2016 8:25 AM, Miguel Angel Ajo Pelayo wrote:
Inline update.
On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo wrote:
On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes wrote:
On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
Yes,
The Ops session on the taxonomy of failure can use some input even before the
session itself!
The etherpad is here:
https://etherpad.openstack.org/p/AUS-ops-Taxonomy-of-Failures
It has a brief outline of some of the info we'd like to gather. There is also a
Google spreadsheet here:
Magnum doesn't have to preclude the tight integration for single COEs you
speak of. The heavy lifting of tight integration of the COE into
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself).
On Wed, Apr 20, 2016 at 6:31 AM, Duncan Thomas wrote:
> On 20 April 2016 at 08:08, koshiya maho wrote:
>
>
>> This design was discussed, reviewed and approved in cross-projects [1] and
>> already implemented in nova, cinder and neutron.
>>
Thierry Carrez wrote:
Adrian Otto wrote:
This pursuit is a trap. Magnum should focus on making native container
APIs available. We should not wrap APIs with leaky abstractions. The
lowest common denominator of all COEs is a remarkably low-value API
that adds considerable complexity to Magnum
Hi Mark,
I have gone through the announcement in detail. From my point of view, it seems
to resolve the license issue that was blocking us before. I have included
the Magnum team in the ML to see if our team members have any comments.
Thanks for the support from foundation.
Best regards,
Feel free to take the following (if it's similar to what you are thinking):
https://github.com/openstack/anvil/blob/master/anvil/utils.py#L90
IMHO though, if it's a decorator, the retrying library can already perform
this:
https://pypi.python.org/pypi/retrying
And a couple of the oslo-cores (jd,
Excerpts from Chris Dent's message of 2016-04-20 22:16:10 +0100:
>
> Will the already existing retrying[1] do the job or is it missing
> features (the namespacing thing seems like it could be an issue)
> or perhaps too generic?
>
> [1] https://pypi.python.org/pypi/retrying
>
Yes, please, let's
Will the already existing retrying[1] do the job or is it missing
features (the namespacing thing seems like it could be an issue)
or perhaps too generic?
[1] https://pypi.python.org/pypi/retrying
--
Chris Dent (╯°□°)╯︵┻━┻ http://anticdent.org/
freenode: cdent
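For reference, a small usage sketch of retrying's exponential backoff (keyword arguments from its documented API; the wrapped function is made up):

from retrying import retry

@retry(wait_exponential_multiplier=1000,  # start around 1s, doubling
       wait_exponential_max=10000,        # cap each wait at 10s
       stop_max_delay=30000)              # give up after 30s overall
def call_flaky_service():
    # stand-in for the real call that may raise on timeout
    raise IOError("timed out")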
Boden,
Are you thinking of implementing something which would perform exponentially
backed-off calls to some arbitrary function until that method returns
something other than a timeout?
I think that would be very versatile, and useful in a wide variety of places.
Thanks,
-amrith
>
tl;dr: You're right, but the point I was making was that all distros are
understaffed.
Longer version:
On 04/19/2016 06:24 PM, Ian Cordasco wrote:
>> You can also add "Ubuntu" in the list here, as absolutely all OpenStack
>> dependencies are maintained mostly by me, within Debian, and then later
>
On Wed, 2016-04-20 at 14:10 -0600, Boden Russell wrote:
> Anyone averse to me crafting an initial oslo patch to kick off the
> details on this one?
Have you evaluated any existing solutions in this space? A quick search
on PyPi turns up "backoff", which seems to provide several backoff
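For comparison, the README-style usage of that backoff package (decorator per its docs; the wrapped function here is illustrative):

import backoff
import requests

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_tries=8)
def get_url(url):
    return requests.get(url)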
Sounds good to me Boden.
-- Dims
On Wed, Apr 20, 2016 at 4:10 PM, Boden Russell wrote:
> Today there are a number of places in nova, neutron and perhaps
> elsewhere that employ backoff + timeout strategies (see [1] - [4]).
> While we are working towards a unified approach in
On 2016-04-20 16:00:38 -0400 (-0400), Bob Hansen wrote:
> I'm using Liberty to host my infracloud to run gate jobs. Do I
> dare move my systems hosting my compute\controller nodes (hosting
> jenkins workers) to Ubuntu 16.04? How about zuul/jenkins/puppet
> etc?
>
> Anyone have any experience with
On Wed, Apr 20, 2016 at 10:28 AM, Dean Troyer wrote:
> On Wed, Apr 20, 2016 at 9:43 AM, Doug Hellmann wrote:
>
>> Cliff looks for commands on demand. If we modify its command loader to
>> support some "built in" commands, and then implement the
On Wed, Apr 20, 2016 at 6:16 AM, Steven Hardy wrote:
> All,
>
> We discussed some changes to our release cycle in the weekly meeting
> yesterday, namely to align with the intended direction of the puppet
> community to adopt the new cycle-trailing tag[1].
>
> We have also
Today there are a number of places in nova, neutron and perhaps
elsewhere that employ backoff + timeout strategies (see [1] - [4]).
While we are working towards a unified approach in neutron for RPC [5],
it appears such logic could benefit the greater community as a reusable
oslo implementation.
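To make the shape of such a reusable helper concrete, here is a minimal sketch; names like max_elapsed are illustrative, not a proposed oslo API:

import random
import time

def call_with_backoff(func, base=0.5, cap=30.0, max_elapsed=120.0,
                      retryable=(Exception,)):
    """Retry func() with exponential backoff until success or timeout."""
    start = time.time()
    attempt = 0
    while True:
        try:
            return func()
        except retryable:
            attempt += 1
            if time.time() - start >= max_elapsed:
                raise
            # Exponential backoff with full jitter, capped per attempt.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))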
Thanks for the feedback Armando,
Adding missing tag.
Best regards,
Igor.
From: Armando M. [mailto:arma...@gmail.com]
Sent: Wednesday, April 20, 2016 6:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev]
Hi Hongbin.
On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu wrote:
>
>
>
>
> From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
> [mailto:li-gong.d...@hpe.com]
> Sent: April-20-16 3:39 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject:
Hi.
On Wed, Apr 20, 2016 at 5:43 PM, Fox, Kevin M wrote:
> If the ops are deploying a cloud big enough to run into that problem, I
> think they can deploy a scaled out docker registry of some kind too, that
> the images can point to? Last I looked, it didn't seem very
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-20-16 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]Cache docker images
Hongbin,
Both of the approaches you suggested may only work for one binary format. If you
try to
> -----Original Message-----
> From: Ian Cordasco [mailto:sigmaviru...@gmail.com]
> Sent: April-20-16 1:56 PM
> To: Adrian Otto; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
>
>
I'll go ahead and be the guy to ask for N flavors. :)
AZs are kind of restrictive in what they can do, so we usually use
flavors, which are much more flexible.
I can totally see a project with 3 different types of flavors and wanting them all
in the same k8s cluster managed by labels.
On 04/20/2016 04:57 PM, Vasyl Saienko wrote:
> Hello Ironic-staging-drivers team,
>
> At the moment there are no tests for ironic-staging-drivers at the gates.
> I think we need to have a simple test that installs drivers with their
> dependencies and ensures that ironic-conductor is able to
> On Apr 18, 2016, at 8:37 AM, Sebastien Badia wrote:
>
> Hello here,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> I lack the dedicated time to contribute in my spare time to the project. And I
> don't work anymore on
On Wed, Apr 20, 2016 at 12:41 PM, Ken D'Ambrosio wrote:
> So I'm trying to write up a user/tenant creation script, and then when
> it's done, I want to fire off an e-mail with relevant info to the new
> user. One thing I'd like to send along is the URL for horizon for
> whichever
We are psyched to announce the release of:
python-openstackclient 2.4.0: OpenStack Command-line Client
This release is part of the newton release series.
With source available at:
https://git.openstack.org/cgit/openstack/python-openstackclient
With package available at:
From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) [mailto:li-gong.d...@hpe.com]
Sent: April-20-16 3:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision
minion nodes
Hi Folks,
We are considering
[snip]
I need to be at both of those Heat ones anyway, so this doesn't really
help me. I'd rather have the DLM session in this slot instead. (The only
sessions I can really skip are the Release Model, Functional Tests and
DLM.) That would give us:
Heat / TripleO
We are content to announce the release of:
hacking 0.11.0: OpenStack Hacking Guideline Enforcement
With package available at:
https://pypi.python.org/pypi/hacking
For more details, please see below.
Changes in hacking 0.10.1..0.11.0
---------------------------------
c3b03a9 Updated from
-----Original Message-----
From: Adrian Otto
Reply: OpenStack Development Mailing List (not for usage questions)
Date: April 19, 2016 at 19:11:07
To: OpenStack Development Mailing List (not for usage questions)
On 4/20/16 1:00 PM, Rico Lin wrote:
Hi team
Let's plan for more informal meetup (relax) time! Let all heaters and folks from any
other projects have fun and a chance for technical discussions together.
After discussing in the meeting, we will have a pre-meetup-meetup on Friday
morning to have a cup of coffee or
So I'm trying to write up a user/tenant creation script, and then when
it's done, I want to fire off an e-mail with relevant info to the new
user. One thing I'd like to send along is the URL for horizon for
whichever cloud I've just created them accounts on... but I don't see
how to get that
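One possible avenue, if the deployment registers the dashboard in the keystone catalog (many don't, and the 'dashboard' service type below is an assumption), is to read it from the service catalog:

from keystoneclient.v3 import client

keystone = client.Client(session=sess)  # sess: authenticated keystoneauth1 session
for service in keystone.services.list():
    if service.type == 'dashboard':
        for endpoint in keystone.endpoints.list(service=service.id):
            print(endpoint.url)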
On Wed, Apr 20, 2016 at 5:38 AM, Mathieu Mitchell wrote:
>
>
> On 2016-04-19 11:29 PM, Tan, Lin wrote:
>
>> I agree this is reasonable to support all these cases in “cold upgrades”
>> but in the supports-rolling-upgrade (live upgrade, in other words) case it is
>> different
1600 UTC tomorrow (Thursday), right? So if Outlook is calculating it right,
that’s 1100 US Central, 0900 US Pacific?
Thanks,
Mike
From: Matt Van Winkle
Date: Wednesday, April 20, 2016 at 9:38 AM
To: OpenStack Operators
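A quick way to double-check such conversions (April falls in US daylight time):

from datetime import datetime
import pytz

utc = pytz.utc.localize(datetime(2016, 4, 21, 16, 0))
print(utc.astimezone(pytz.timezone('US/Central')))  # 2016-04-21 11:00 CDT
print(utc.astimezone(pytz.timezone('US/Pacific')))  # 2016-04-21 09:00 PDT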
Yes – that is correct. Thanks, Mike – a great reminder that those of us in
regions with Daylight Savings are now at a different offset :)
Thanks!
VW
From: Mike Dorman
Date: Wednesday, April 20, 2016 at 12:42 PM
To: Matt Van Winkle
Hello,
There will be no IRC meeting next week due to the OpenStack summit.
Feel free to join us at one of the four design summit sessions we will be
holding:
- Wed: 9:50 - 10:30: Backup your OpenStack infrastructure [1]
- Wed: 11:00 - 11:40: Backup as a service [2]
- Wed: 11:50 - 12:30:
NOTE: this is an operator-focused session and has been tagged for ops so
it appears in the cross track!
On 4/20/16 1:39 PM, Nikhil Komawar wrote:
> Hi all,
>
> At the Austin summit, I've scheduled a Glance work session [1] for
> gathering input on Glance deployments and feedback surrounding the
Hi all,
At the Austin summit, I've scheduled a Glance work session [1] for
gathering input on Glance deployments and feedback surrounding the same.
Also, I've taken the liberty to propose a few topics related to the same
at the discussion etherpad [2]. These are general discussion items that
you
On Wed, Apr 20, 2016 at 9:43 AM, Doug Hellmann wrote:
> Cliff looks for commands on demand. If we modify its command loader to
> support some "built in" commands, and then implement the commands in OSC
> that way, we can avoid scanning the real plugin system until we hit
Hi all,
Last week when I asked if we needed a meeting for this week, the poll
[1] resulted in "maybe". I currently do not see any 'specific' agenda
[2] items posted for this week's meeting. I am assuming everyone is busy
going into the summit and the updates can be shared then or the meeting
On 2016-04-19 11:30:38 -0500 (-0500), Ian Cordasco wrote:
[...]
> I've argued with different downstream distributors about their own
> judgment of what portions of the patch to apply in order to fix an
> issue with an assigned CVE. It took much longer than should have
> been necessary in at least
Dmitry,
I mean, currently shotgun fetches services' configuration along with
astute.yaml. These files contain passwords, keys, and tokens. I believe these
should be sanitized. Or, better yet, there should be an option to sanitize
sensitive data from fetched files.
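To illustrate the kind of sanitization meant here, a rough sketch (the key patterns are guesses, not shotgun's actual implementation):

import re

SENSITIVE = re.compile(r'(?i)^(\s*\S*(password|token|secret|key)\S*\s*:\s*).*$')

def sanitize(text):
    # Mask the value of any YAML-style "key: value" line whose key
    # looks sensitive before attaching the file to a snapshot.
    return '\n'.join(SENSITIVE.sub(r'\1<redacted>', line)
                     for line in text.splitlines())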
Aleksandr,
Currently Fuel has a
On 20 April 2016 at 09:31, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:
> Dear OpenStack Community,
>
>
>
> We've been investigating options in/around OpenStack for supporting
> Service Function Chaining. The networking-sfc project has made significant
> progress in this space,
Hi team
Let's plan for more informal meetup (relax) time! Let all heaters and folks from any
other projects have fun and a chance for technical discussions together.
After discussing in the meeting, we will have a pre-meetup-meetup on Friday
morning to have a cup of coffee or some food. I would like to ask if anyone
On 04/20/2016 11:44 AM, Dan Prince wrote:
We've had a run of really spotty CI in TripleO. This is making it
really hard to land patches if reviewers aren't online. Specifically we
seem to get better CI results when the queue is less full (nights and
weekends)... often when core reviewers aren't
Dear OpenStack Community,
We've been investigating options in/around OpenStack for supporting Service
Function Chaining. The networking-sfc project has made significant progress in
this space, and we see lots of value in what has been completed. However, when
we looked at the related IETF
Excerpts from Akihiro Motoki's message of 2016-04-20 13:20:40 +0900:
> Hi,
>
> I noticed Mitaka release notes for neutron *-aas [1,2,3] are not
> referred to from anywhere.
> Neutron has four deliverables (neutron, lbaas, fwaas, vpnaas),
> but only the release note of the main neutron repo is
A couple of questions about our Austin-related planning tools...
- Can one's calendar at
https://www.openstack.org/summit/austin-2016/summit-schedule/#day=2016-04-25
be exported as .ics, or otherwise integrated into a wider calendaring
system?
- Is the app working for anyone else? All I get
So, the location of a unified API is an interesting topic... Can things be arranged
in such a way that the abstraction largely exists only on the client, in its own
project?
So, what about a change of ideas
The three pain points in the abstraction are:
* Authentication
* Connectivity
*
On 20 April 2016 at 00:39, Andreas Jaeger wrote:
> On 2016-04-20 06:20, Akihiro Motoki wrote:
> > Hi,
> >
> > I noticed Mitaka release notes for neutron *-aas [1,2,3] are not
> > referred to from anywhere.
> > Neutron has four deliverables (neutron, lbaas, fwaas, vpnaas),
> > but
If the ops are deploying a cloud big enough to run into that problem, I think
they can deploy a scaled out docker registry of some kind too, that the images
can point to? Last I looked, it didn't seem very difficult. The native docker
registry has ceph support now, so if you're running ceph for
There's no significant change with the global EC clusters story in the 2.7
release. That's something we're discussing next week at the summit.
--John
On 19 Apr 2016, at 22:47, Mark Kirkwood wrote:
> Hi,
>
> Has the release of 2.7 significantly changed the assessment here?
>
> Thanks
>
> Mark
We've had a run of really spotty CI in TripleO. This is making it
really hard to land patches if reviewers aren't online. Specifically we
seem to get better CI results when the queue is less full (nights and
weekends)... often when core reviewers aren't around.
One thing that would help is if
Hello LDT folks,
So, I totally dropped the ball last month, but it sounds like most of us were
busy running clouds and forgot about the meeting. I want to make sure we get
together tomorrow, though. The Summit is next week and we have a good amount
of time to work on things there, so I’d like
Hello Ironic-staging-drivers team,
At the moment there are no tests for ironic-staging-drivers at the gates.
I think we need to have a simple test that installs drivers with their
dependencies and ensures that ironic-conductor is able to start.
It may be performed in the following way. Each
Excerpts from Steve Baker's message of 2016-04-20 16:38:25 +1200:
> On 20/04/16 06:17, Monty Taylor wrote:
> > On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:
> >> On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
> >>> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young
So I guess the mentioned error should be resolved? Does it work now?
Zitat von Paras pradhan:
Yes, I have it set up in nova.conf and neutron.conf.
On Wed, Apr 20, 2016 at 9:11 AM, Eugen Block wrote:
And did you change it to auth_type instead of
No, I still have that error. Other than that, I don't see any other errors.
On Wed, Apr 20, 2016 at 9:22 AM, Eugen Block wrote:
> So I guess the mentioned error should be resolved? Does it work now?
>
>
>
> Zitat von Paras pradhan:
>
> Yes, I have it set
And did you change it to auth_type instead of auth_plugin? Also you
should make sure that this option is in the correct section of your
config file, for example
[keystone_authtoken]
...
auth_type = password
or
[neutron]
...
auth_type = password
Regards,
Eugen
Zitat von Paras pradhan
> I upgraded OpenStack from Kilo to Mitaka.
> But now I can't start any instance. On the compute node, the nova daemons said:
> 2016-04-20 12:40:15.395 226509 ERROR oslo_messaging.rpc.dispatcher
> [req-86e4ab2e-229a-438d-87d2-d9ce9f749dbc 79401c3a746740ef826762ca9eaeb207
>
Hi Folks,
We will not be having our regular DVR meeting this week and for next week.
We will resume our meeting on May 4th 2016.
Thanks.
Swaminathan Vasudevan
Systems Software Engineer (TC)
HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: