Hi!
`novaclient.client.Client` entry point supports almost the same arguments
as `novaclient.v2.client.Client`. The only difference is the api_version
argument, so you can set up the region via `novaclient.client.Client` in the
same way as with `novaclient.v2.client.Client`.
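A usage sketch of what that looks like (the keystoneauth session `sess` and the region name are assumptions for illustration, built elsewhere in your code):

```python
from novaclient import client

# `sess` is a keystoneauth1 session you have constructed elsewhere
# (an assumption for this sketch); region_name selects the region,
# and the API version is passed as the first argument.
nova = client.Client("2", session=sess, region_name="RegionOne")
```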
On Mon, Feb 22, 2016 at 6:11 AM, Xav
Thanks Jim for summarizing mid-cycle meeting.
I'd like to clarify the next step for boot-from-volume things.
In the etherpad (https://etherpad.openstack.org/p/ironic-mitaka-midcycle),
there's high level plan:
* High level plan
* Review specs
* Write new specs for the base drivers - This may
On 02/18/2016 11:38 PM, D'Angelo, Scott wrote:
> Cinder team is proposing to add support for API microversions [1]. It came up
> at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on
> IRC have raised questions about this [3]
>
> Please weigh in on the design decision to
Hi,
In http://docs.openstack.org/developer/python-novaclient/api.html there are
some pretty clear instructions not to use novaclient.v2.client.Client, but I
can't see another way to specify the region - there's more than one in my
installation, and no param for region in novaclient.client.Client.
Gary Kotton wrote:
I think that IBM has a very interesting policy in that two IBM cores
should not approve a patch posted by one of their colleagues (that is
what Chris RIP used to tell me). It would be nice if the community would
follow this policy.
Thanks
Gary
Sounds similar to a
Hi Yipei,
One reason for that error is that the API service is down. You can run
"rejoin-stack.sh" under your DevStack folder to enter the "screen" console
of DevStack and check whether the services are running well. If you are not
familiar with "screen", which is a terminal multiplexer, you can do a
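For reference, a typical sequence might look like this (the DevStack path is an assumption; `stack` is DevStack's default screen session name):

```shell
cd ~/devstack          # assumed location of your DevStack checkout
./rejoin-stack.sh      # reattaches the "stack" screen session
# Inside screen:
#   Ctrl-a "   list all windows (one per service)
#   Ctrl-a n   switch to the next window; look for tracebacks
#              in the API service's window
```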
Kiru,
That just means you have put even weight on all your drives, so you're
telling Swift to store it that way.
So the short answer is there is more to it than that. Sure, evenly balanced
makes life easier, but it doesn't have to be the case. You can set drive
weights and an overload factor to
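A toy sketch of the proportional idea behind drive weights (this is not Swift's actual ring-builder code, just the intuition: partitions go to devices in proportion to weight):

```python
# Toy illustration: partitions are assigned to devices in proportion
# to their weights, so unequal drive weights simply shift how many
# partitions each drive ends up holding.
def partitions_per_device(weights, num_parts=256):
    total = sum(weights.values())
    return {dev: round(num_parts * w / total) for dev, w in weights.items()}

# A 2TB drive weighted 2.0 next to two 1TB drives weighted 1.0
# receives roughly half of all partitions.
print(partitions_per_device({"r0z1": 1.0, "r0z2": 1.0, "r0z3": 2.0}))
```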
+1 Nice work!
On Mon, Feb 22, 2016 at 10:27 AM, Ryan Hallisey wrote:
> +1. Nice work Angus!
>
> -Ryan
>
> > On Feb 19, 2016, at 11:51 PM, Michał Jastrzębski
> wrote:
> >
> > +1 on condition that he will appear in kolla itself, after
> > all...you'll be a
limited edition (and hilarious) t-shirts are always fun :)
++ on raspberry pis, those are always a hit.
stevemar
From: Hugh Blemings
To: "OpenStack Development Mailing List (not for usage questions)"
, OpenStack
Hi, Yipei,
Is this port blocked from being visited on your host? Please check iptables.
Best Regards
Chaoyi Huang ( Joe Huang )
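The iptables check suggested above might look like this (the port number is a placeholder; substitute the port your service listens on):

```shell
# Look for rules that could reject or drop traffic to the port
# (9696 here is only a placeholder).
sudo iptables -L INPUT -n --line-numbers | grep 9696
# And confirm something is actually listening on it:
sudo ss -tlnp | grep 9696
```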
From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Saturday, February 20, 2016 8:49 PM
To: openstack-dev@lists.openstack.org
Cc: joehuang; Zhiyuan Cai
Subject: [tricircle]
Hi,
I have 3 zones, with different capacity in each. Say I have 4 x 1TB disks
(r0z1 - 1TB, r0z2 - 1TB, r0z3 - 2TB).
The ring builder (rebalance code) keeps all 3 replicas of ¼ of the
partitions in Zone-3. This is the current default behavior of the rebalance
code. This puts pressure on the
Hello, Ian and Jay,
The issue for the use case to address will be described in more detail here:
There are often dozens of data centers in telecom operators' networks, which
means that having more than 10 data centers is quite normal, and these data
centers are geographically distributed, with lots of small edge
Hiya,
On 16/02/2016 21:43, Tom Fifield wrote:
Hi all,
I'd like to introduce a new round of community awards handed out by the
Foundation, to be presented at the feedback session of the summit.
Nothing flashy or starchy - the idea is that these are to be a little
informal, quirky ... but still
+1. Nice work Angus!
-Ryan
> On Feb 19, 2016, at 11:51 PM, Michał Jastrzębski wrote:
>
> +1 on condition that he will appear in kolla itself, after
> all...you'll be a kolla core as well right?;)
>
>> On 19 February 2016 at 21:44, Sam Yaple wrote:
>> +1
With Nova and Keystone both at v3, it helps to have consistent versioning
across all projects.
We still need documentation for transitioning clients from one API version to
the next. With new functionality not available in the previous version, it
should be easier than API changes.
-Original Message-
From:
On Sun, 21 Feb 2016, Jay Pipes wrote:
I don't see how the shared-state scheduler is getting the most accurate
resource view. It is only in extreme circumstances that the resource-provider
scheduler's view of the resources in a system (all of which is stored without
caching in the database)
> On Feb 21, 2016, at 10:38 AM, Steven Dake (stdake) wrote:
>
> Armando,
>
> I apologize if neutron does not have a limit of 2 core reviewers per company
> – I had heard this through the grapevine but a google search of the mailing
> list shows no such limitation.
It goes
Yingxin, sorry for the delay in responding to this thread. My comments
inline.
On 02/17/2016 12:45 AM, Cheng, Yingxin wrote:
To better illustrate the differences between shared-state,
resource-provider and legacy scheduler, I’ve drawn 3 simplified pictures
[1] emphasizing the location of
On 02/21/2016 12:50 PM, Chris Dent wrote:
In a recent api-wg meeting I set forth the idea that it is both a
bad idea to add lots of different headers and to add headers which
have meaning in the name of the header (rather than just the value).
This proved to be a bit confusing, so I was asked to
I think that IBM has a very interesting policy in that two IBM cores should not
approve a patch posted by one of their colleagues (that is what Chris RIP used
to tell me). It would be nice if the community would follow this policy.
Thanks
Gary
From: "Armando M."
On 21 February 2016 at 19:34, Jay S. Bryant
wrote:
> Spent some time talking to Sean about this on Friday afternoon and bounced
> back and forth between the two options. At first, /v3 made the most sense
> to me ... at least it did at the meetup. With people like
In a recent api-wg meeting I set forth the idea that it is both a
bad idea to add lots of different headers and to add headers which
have meaning in the name of the header (rather than just the value).
This proved to be a bit confusing, so I was asked to write it up. I
did:
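A hypothetical illustration of the distinction (both header names are invented for this example, not real OpenStack headers):

```
# Meaning packed into the header *name* (the discouraged pattern):
OpenStack-Feature-Multiattach: True

# Meaning carried in the *value* of one stable header name:
OpenStack-API-Features: multiattach
```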
Armando,
I apologize if neutron does not have a limit of 2 core reviewers per company –
I had heard this through the grapevine but a google search of the mailing list
shows no such limitation.
Regards
-steve
From: "Armando M."
Reply-To:
On 02/20/2016 04:42 PM, Duncan Thomas wrote:
On 20 Feb 2016 00:21, "Walter A. Boring IV" wrote:
> Not that I'm adding much to this conversation that hasn't been said
already, but I am pro v2 API, purely because of how painful and
Hi sahara-dashboard team (perhaps horizon and sahara dev teams),
I am working on translation support in sahara-dashboard for feature
parity with the Liberty horizon.
TL;DR: Why doesn't sahara-dashboard use 'sahara_dashboard' in INSTALLED_APPS?
Can't we add 'sahara_dashboard' to INSTALLED_APPS?
Long
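For context on the question above, this is the kind of Django settings change involved (a sketch; Horizon's actual "enabled file" plugin mechanism may differ in details):

```python
# In a Horizon "enabled" snippet, a dashboard plugin's package is
# appended to Django's INSTALLED_APPS; 'sahara_dashboard' here
# mirrors the package name asked about above.
ADD_INSTALLED_APPS = ["sahara_dashboard"]
```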
On 20 February 2016 at 14:06, Kevin Benton wrote:
> I don't think neutron has a limit. There are 4 from redhat and 3 from hp
> and mirantis right now.
> https://review.openstack.org/#/admin/groups/38,members
>
By the way, technically speaking some of those also only limit
On 20 February 2016 at 12:58, Steven Dake (stdake) wrote:
> Neutron, the largest project in OpenStack by active committers and
> reviewers as measured by the governance repository teamstats tool, has a
> limit of 2 core reviewers per company. They do that for a reason. I
>
I'd say 5 GB should be enough for all the images per distro (maybe
less if we have to squeeze). Since we have 2 strongly supported
distros, 10 GB. If we would like to add all the distros we support, that's
20-25 (I think). That also depends on how many older versions we want to
keep (current+stable
So for thin containers, as opposed to data containers, there is no
migration script needed whatsoever. All it takes is to tear down
neutron-agents and start thin containers.
On 21 February 2016 at 06:47, Jeffrey Zhang wrote:
> I like the thin container idea, and I am +1
Hi,
A recently merged patch from 2 weeks ago allows attaching/detaching volumes
to/from a shelved_offload server:
https://review.openstack.org/#/c/259528/
Network operations on a shelved_offload server are currently not allowed.
There's a bug and a change proposal from a year-and-a-half ago regarding
this
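For reference, the attach/detach in question is the standard CLI call (the IDs are placeholders); with the patch above it also works while the server is shelved_offloaded:

```shell
# Attach a Cinder volume to a server, then detach it again.
nova volume-attach <server-id> <volume-id>
nova volume-detach <server-id> <volume-id>
```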
On 19 February 2016 at 5:58, John Garbutt wrote:
> On 17 February 2016 at 17:52, Clint Byrum wrote:
> > Excerpts from Cheng, Yingxin's message of 2016-02-14 21:21:28 -0800:
> >> Hi,
> >>
> >> I've uploaded a prototype https://review.openstack.org/#/c/280047/ to
> >> testify its
I like the thin container idea, and I am +1 too. But the only concern is
that we MUST provide a robust migration script (or Ansible role/task) to do
the conversion. Do we have enough time for this?
On Sun, Feb 21, 2016 at 3:44 PM, Michal Rostecki
wrote:
> On
Yes, the intention from the start was to see if we can converge on and use
os-vif, and I certainly see us using it.
On Thu, Feb 18, 2016 at 12:32 PM, Daniel P. Berrange
wrote:
> On Thu, Feb 18, 2016 at 09:01:35AM +, Liping Mao (limao) wrote:
> > Hi Kuryr team,
> >
> > I
Hey everyone,
At the moment OSA installs the python-neutronclient in a few locations
including the containers neutron-server, utility, heat, tempest.
Now neutron has a bunch of sub-projects like networking-l2gw [1],
networking-bgpvpn [2] networking-plumgrid [5] etc, which have their own
Thanks for the update. Will there be an option to connect remotely? Google
chat? Webex?
From: "Armando M."
Reply-To: OpenStack List
Date: Saturday, February 20, 2016 at
Hello All,
We will have an IRC meeting tomorrow (Monday, 2/22) at 0900 UTC
in #openstack-meeting-4
Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Dragonflow
You can view last meeting action items and logs here: