On 12/1/2017 10:42 AM, Chris Dent wrote:
December? Wherever does the time go? This is resource providers and
placement update 43. The first one of these was more than a year ago
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107171.html
I like to think they've been
Excerpts from corvus's message of 2017-12-01 16:08:00 -0800:
> Tristan Cacqueray writes:
>
> > Hi,
> >
> > Now that the zuulv3 release is approaching, please find below a
> > follow-up on this spec.
> >
> > The current code could use one more patch[0] to untangle the common
+1
On Thu, Nov 30, 2017 at 10:34 AM, Dan Prince wrote:
> +1
>
> On Wed, Nov 29, 2017 at 2:34 PM, John Trowbridge wrote:
>
>> I would like to propose Ronelle be given +2 for the above repos. She has
>> been a solid contributor to tripleo-quickstart and
Fabien Boucher writes:
>> * finish git driver
>>
>
> If that's OK with you, I'd like to propose myself to work on the git driver topic.
> I'll try to
> provide a first patch ASAP.
That's great, thanks!
There's some code there already, but it has no tests and hasn't been
used in a
Tristan Cacqueray writes:
> Hi,
>
> Now that the zuulv3 release is approaching, please find below a
> follow-up on this spec.
>
> The current code could use one more patch[0] to untangle the common
> config from the openstack provider specific bits. The patch often needs
>
On 11/28/2017 9:13 AM, Gustavo Randich wrote:
(running Mitaka)
When doing block live-migration, if the image / backing file is not
present on the destination host, sometimes pre-live migration fails after 60
seconds as shown below. Retrying the migration to the same destination
host succeeds.
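For anyone scripting the retry described above, a minimal sketch using the
Mitaka-era python-novaclient; the credentials, instance name, and destination
host below are placeholders, not values from the report:

    # A minimal sketch of re-issuing the block live-migration; credentials,
    # names, and the destination host are placeholders.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')
    server = nova.servers.find(name='my-instance')
    # block_migration=True requests a block live-migration (no shared
    # storage); per the report above, retrying the same call to the same
    # destination host succeeds after the initial 60-second failure.
    server.live_migrate(host='dest-host', block_migration=True,
                        disk_over_commit=False)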
Hello Glancers,
As discussed at yesterday's Glance meeting, the priority for this week
is getting ready for the release of the Q-2 milestone, so:
1. the scrubber refactor
2. bugs scheduled for Q-2
3. enhanced tests for interoperable image import ("IIR")
I've put a list of patches and their
Hello,
As Queens Milestone 2 approaches its end, here is the second iteration
of updates on Queens Tempest Plugin Split community goal [1].
**Not Started**
Congress
ec2-api
freezer
mistral
monasca
senlin
tacker
Telemetry
Trove
Vitrage
**In Progress**
Cinder
Heat
Ironic
magnum
manila
Neutron
On Fri, Dec 1, 2017 at 2:47 PM, Matt Riedemann wrote:
> Andrew Laski also mentioned in IRC that we didn't replace the original
> instance.image_ref with the shelved image id because the shelve operation
> should be transparent to the end user, they have the same image (not
>
Dear Stackers,
I would like to draw your attention to our latest article about SNIC Science
Cloud (SSC) https://cloud.snic.se
Title:
SNIC Science Cloud (SSC): A National-Scale Cloud Infrastructure for Swedish
Academia
Abstract:
The cloud computing paradigm has fundamentally changed the way
On 12/1/2017 1:25 PM, Mathieu Gagné wrote:
Hi,
On Fri, Dec 1, 2017 at 12:24 PM, Matt Riedemann wrote:
I think we can assert as follows:
2. If we're going to point the instance at an image_ref, we shouldn't delete
that image. I don't have a good reason why besides
Hi all,
I'm following up a thread we started earlier this year with proposals
for fixing RBAC [0]. Just wanted to give a quick update that the
specification has merged [1] and the implementation is underway [2]. I
will have a few more patches up shortly to handle the token scoping bits.
Adding
Hi,
On Fri, Dec 1, 2017 at 12:24 PM, Matt Riedemann wrote:
>
> I think we can assert as follows:
>
> 2. If we're going to point the instance at an image_ref, we shouldn't delete
> that image. I don't have a good reason why besides deleting things which
> nova has a reference
We have just restored and submitted new patchsets for the StorPool
block storage driver in the three components:
- os-brick: https://review.openstack.org/#/c/192639/
- cinder: https://review.openstack.org/#/c/220155/
- nova: https://review.openstack.org/#/c/140733/
Now, while we do realize that
On Fri, 1 Dec 2017, Gilles Dubreuil wrote:
Hi Chris,
Thank you for those precious details.
I just added https://review.openstack.org/#/c/524467/ to augment the existing
guidelines [2] and to get started with the API Schema (consumption) topic.
Cool, thanks for doing that. I suspect
I came across this bug during triage today:
https://bugs.launchpad.net/nova/+bug/1732428
It essentially says that unshelving an instance and then resizing that
instance later, depending on the type of image backend, can fail.
It's pointed out that when we complete the unshelve procedure, we
Hey all,
Here is the weekly report for what was accomplished during office hours
this week. Full logs are available [0].
Bug #1734871 in OpenStack Identity (keystone): "overcloud deployment
fails on mistral action DeployStackAction"
https://bugs.launchpad.net/keystone/+bug/1734871
Triaged,
Congrats Daniel
On 01-Dec-2017 10:22 PM, "Jakub Libosvar" wrote:
> Congratulations! Very well deserved! :)
>
> On 01/12/2017 17:45, Lucas Alvares Gomes wrote:
> > Hi all,
> >
> > I would like to welcome Daniel Alvarez to the networking-ovn core team!
> >
> > Daniel has been
Congratulations! Very well deserved! :)
On 01/12/2017 17:45, Lucas Alvares Gomes wrote:
> Hi all,
>
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>
> Daniel has been contributing to the project for a good while already
> and helping *a lot* with reviews and code.
>
Thanks a lot guys!
It's a pleasure to work with you all :)
Cheers,
Daniel
On Fri, Dec 1, 2017 at 5:48 PM, Miguel Angel Ajo Pelayo wrote:
> Welcome Daniel! :)
>
> On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes wrote:
>
>> Hi all,
>>
>> I
Welcome Daniel! :)
On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes wrote:
> Hi all,
>
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>
> Daniel has been contributing to the project for a good while already
> and helping *a lot* with reviews
Hi all,
I would like to welcome Daniel Alvarez to the networking-ovn core team!
Daniel has been contributing to the project for a good while already
and helping *a lot* with reviews and code.
Welcome onboard man!
Cheers,
Lucas
December? Wherever does the time go? This is resource providers and
placement update 43. The first one of these was more than a year ago
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107171.html
I like to think they've been pretty useful. I know they've helped me
keep
On 11/30/2017 6:05 PM, Nematollah Bidokhti wrote:
Hi,
Our [Fault-Genes WG] has been working on defining the fault
classifications for key OpenStack projects in an effort to support
OpenStack fault management & self-healing.
We have been using machine learning (unsupervised data) as a method
On 12/1/17 5:11 PM, Jiří Stránský wrote:
On 21.11.2017 12:01, Jiří Stránský wrote:
Kubernetes on the overcloud
===========================
The work on this front started with two patches [0][1] that some of you
might have
seen and then evolved into using the config download mechanism to
execute
On 2017-11-01 12:21:33 +0000 (+0000), Harry Mallon wrote:
> Git version 2.15 is the most recent version and has a couple of
> issues with git-review.
[...]
I meant to reply to the thread sooner, but this should now be
working in git-review 1.26.0 (released roughly two weeks ago).
--
Jeremy
On Fri, Dec 1, 2017 at 11:09 AM, Colleen Murphy wrote:
> In the "Making OpenStack More Palatable to Part-Time Contributors"
> Forum session in Sydney, one barrier to contribution that came up was
> keeping up with everything happening in OpenStack. The dev mailing
> list is a
On 21.11.2017 12:01, Jiří Stránský wrote:
Kubernetes on the overcloud
===========================
The work on this front started with two patches [0][1] that some of you might have
seen and then evolved into using the config download mechanism to execute these
tasks as part of the undercloud
In the "Making OpenStack More Palatable to Part-Time Contributors"
Forum session in Sydney, one barrier to contribution that came up was
keeping up with everything happening in OpenStack. The dev mailing
list is a firehose and IRC can be just as daunting, especially for
contributors in
Hi,
On Wed, Nov 1, 2017 at 10:47 PM, James E. Blair wrote:
> Hi,
>
> At the PTG we brainstormed a road map for Zuul once we completed the
> infra cutover. I think we're in a position now that we can get back to
> thinking about this, so I've (slightly) cleaned it up and
On Fri, Dec 1, 2017 at 7:54 AM, Emilien Macchi wrote:
> Bogdan and Dmitry's suggestions are imho a bit too much and would lead
> to very very (very) long names... Do we actually want that?
>
No, I don't think so. I think -- is ideal for communicating at least the
basics. If we
On Fri, Dec 1, 2017 at 8:05 AM, Alex Schultz wrote:
> On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin wrote:
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now
On Fri, Dec 1, 2017 at 3:54 AM, Bogdan Dobrelya wrote:
> On 11/30/17 10:36 PM, Wesley Hayutin wrote:
>>
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now have a successful
>>
On 12/01/2017 08:57 AM, si...@turka.nl wrote:
Hi,
I have created a flavor with the following metadata:
quota:disk_write_bytes_sec='10240'
This should limit writes to disk to 10240 bytes per second (10 KB/s). I also
tried it with a higher number (100 MB/s).
Using the flavor I have launched an instance and
On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin wrote:
> Greetings,
>
> Just wanted to share some progress with the containerized undercloud work.
> Ian pushed some of the patches along and we now have a successful undercloud
> install with containers.
>
> The initial
Bogdan and Dmitry's suggestions are imho a bit too much and would lead
to very very (very) long names... Do we actually want that?
On Fri, Dec 1, 2017 at 2:02 AM, Sanjay Upadhyay wrote:
>
>
> On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya wrote:
>>
>>
Hi Adhi,
Do you mean that you can’t run two VMs each with 8 vCPUs, or do you mean that
you are trying to run one VM with more than 8 vCPUs?
I believe that the cpu_allocation_ratio means that you can re-use each physical
cpu (thread) up to 16 times with different VMs, but each VM is still
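To make that arithmetic concrete, a small worked sketch (the 8 threads come
from this thread; 16.0 is nova's default cpu_allocation_ratio):

    # Worked example of the oversubscription arithmetic described above.
    physical_threads = 8          # host CPU threads, as in this thread
    cpu_allocation_ratio = 16.0   # nova's default overcommit ratio

    # The scheduler will place up to this many vCPUs on the host in total,
    # spread across many VMs:
    schedulable_vcpus = int(physical_threads * cpu_allocation_ratio)  # 128

    # A single VM, however, gains nothing from requesting more vCPUs than
    # the host has physical threads:
    max_useful_vcpus_per_vm = physical_threads  # 8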
I have PCI passthrough enabled for some of my hypervisior hosts. Initial
scheduling for flavors that have the passthrough assigned works properly, i.e.
instances are assigned to hypervisors with the resources, and once the available
pool of resources is consumed, creation of new instances fails.
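For context, the flavor-side wiring for passthrough usually looks like the
hedged sketch below with python-novaclient; the alias 'a1' must match a pci
alias configured in nova.conf and is assumed here purely for illustration:

    # A hedged sketch of requesting a PCI device via a flavor extra spec.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')
    flavor = nova.flavors.find(name='pci-flavor')  # flavor name assumed
    # 'a1:1' asks the scheduler for one device matching the alias 'a1'
    # configured in nova.conf.
    flavor.set_keys({'pci_passthrough:alias': 'a1:1'})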
On 12/01/2017 05:04 AM, Luigi Toscano wrote:
On Friday, 1 December 2017 01:34:36 CET Monty Taylor wrote:
First and most importantly you need to update python-saharaclient to
make sure it can handle an unversioned endpoint in the catalog (by
doing discovery) - and that if it finds an
Hello Bernd,
thank you for taking the time to answer :)
Unfortunately one of the problems in my configuration is that L3 is
handled directly by the ToR switches, which do not support NAT, and as
far as I understand NAT should happen at the L3 router.
So it's not really a matter of will, I actually
Hello,
On November 29 we came to the end of the sprint using our new team structure [1],
and here are the highlights:
Sprint Review:
The goal of this sprint was to reduce the tech debt generated by the other
sprints as a way to reduce the work of the Ruck and Rover.
We chose the most relevant cards in
Hi,
I have created a flavor with the following metadata:
quota:disk_write_bytes_sec='10240'
This should limit writes to disk to 10240 bytes per second (10 KB/s). I also
tried it with a higher number (100 MB/s).
Using the flavor I have launched an instance and ran a write speed test.
For an unknown reason,
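For reference, setting the extra spec described above programmatically looks
roughly like this sketch with python-novaclient; the flavor name and
credentials are placeholders:

    # A sketch of setting the disk write quota extra spec on a flavor.
    from novaclient import client

    nova = client.Client('2', 'user', 'password', 'project',
                         'http://keystone.example.com:5000/v2.0')
    flavor = nova.flavors.find(name='m1.small')  # flavor name assumed
    # quota:disk_write_bytes_sec caps disk writes in bytes per second; with
    # the libvirt driver it is applied as an <iotune> write_bytes_sec
    # element on the instance's disks.
    flavor.set_keys({'quota:disk_write_bytes_sec': '10240'})

Note that the quota is applied to instances launched from the flavor; editing
the flavor does not change instances that are already running.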
Hey,
I am interested in getting some feedback on a proposed blueprint for Vitrage.
BLUEPRINT:
TITLE: Add the ability to ‘suppress’ alarms by Alarm Type and/or Resource
When managing a cloud, there are situations where a particular alarm or a set
of alarms from a particular resource are
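To illustrate the proposed semantics, a sketch of the filtering idea (this is
not Vitrage code; the field names 'type' and 'resource_id' are assumed):

    # A sketch of the proposed suppression semantics, for illustration only.
    def suppress(alarms, suppressed_types=(), suppressed_resources=()):
        """Drop alarms whose type or originating resource is suppressed."""
        return [alarm for alarm in alarms
                if alarm['type'] not in suppressed_types
                and alarm['resource_id'] not in suppressed_resources]

    # Example: hide all high-CPU alarms plus anything from one noisy host.
    alarms = [{'type': 'cpu_high', 'resource_id': 'host-1'},
              {'type': 'disk_full', 'resource_id': 'host-2'}]
    print(suppress(alarms, suppressed_types={'cpu_high'}))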
On 2017-12-01 05:03 AM, 李田清 wrote:
> Hello,
> we tested workload partitioning, and found it much slower than not
> using it.
> After some review, we found that, after getting samples from
> notifications.sample,
> ceilometer unpacks them and sends them one by one to the pipe
>
I don't know what works for you, and I am not really a practitioner, but here
are a few suggestions.
- openstack router set --enable-snat for a short window of time. Of course,
that would give access to the entire internet, limited only in time.
- Use egress rules in security groups, or
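As a sketch of the egress-rules suggestion, restricting outbound traffic to a
single destination CIDR with openstacksdk might look like this (the cloud
name, group name, and CIDR are assumptions):

    # A hedged sketch: allow egress only to a single destination CIDR.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # assumes a clouds.yaml entry
    sg = conn.network.find_security_group('limited-egress')
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction='egress',
        ethertype='IPv4',
        protocol='tcp',
        remote_ip_prefix='203.0.113.0/24',  # the one network to allow out to
    )
    # Note: a new security group comes with allow-all egress rules, which
    # would need to be deleted for this restriction to have any effect.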
Ok, fixed - by mistake my scripts were still using an old bundle file - sorry
and thanks!
---
Andreas Scheuring (andreas_s)
On 30. Nov 2017, at 16:11, Andreas Scheuring wrote:
Hi Felipe,
thanks for your reply.
I now forced the values to be strings (like you
On Friday, 1 December 2017 01:34:36 CET Monty Taylor wrote:
> First and most importantly you need to update python-saharaclient to
> make sure it can handle an unversioned endpoint in the catalog (by
> doing discovery) - and that if it finds an unversioned endpoint in the
> catalog it knows to
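The discovery referred to here is what keystoneauth1 performs when handed an
unversioned endpoint; a hedged sketch (auth details are placeholders;
'data-processing' is sahara's service type):

    # A hedged sketch of version discovery with keystoneauth1: given an
    # unversioned endpoint in the catalog, discovery follows the version
    # document to find a usable versioned endpoint.
    from keystoneauth1 import identity, session

    auth = identity.Password(
        auth_url='https://keystone.example.com/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    data = sess.get_endpoint_data(service_type='data-processing')
    print(data.url, data.api_version)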
On 11/30/17 10:36 PM, Wesley Hayutin wrote:
Greetings,
Just wanted to share some progress with the containerized undercloud work.
Ian pushed some of the patches along and we now have a successful
undercloud install with containers.
The initial undercloud install works [1]
The idempotency
Hi!
This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker
If you are working on something (or plan to work on something) that is
not on the tracker, feel
Hello,
we tested workload partitioning, and found it much slower than not using it.
After some review, we found that, after getting samples from
notifications.sample,
ceilometer unpacks them and sends them one by one to the pipe
ceilometer.pipe.*, which makes the consumer slow.
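A sketch of the difference being described (illustrative only, not
ceilometer's code):

    # Per-sample publishing pays per-message overhead once per sample;
    # batching amortizes it across the whole batch.
    def publish_one_by_one(samples, send):
        for sample in samples:
            send([sample])                       # one message per sample

    def publish_batched(samples, send, batch_size=100):
        for i in range(0, len(samples), batch_size):
            send(samples[i:i + batch_size])      # one message per batch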
On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya wrote:
> On 11/30/17 8:11 PM, Emilien Macchi wrote:
>
>> A few months ago, we renamed ovb-updates to be
>> tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
>> The name is much longer but it describes better what it's
Hello All,
I'm quite new to OpenStack and I'm still trying to figure out how
things work or are supposed to work.
This is the scenario.
Let's imagine we've spun up a new instance on a network which is not
intended to reach or to be reached from an external network (absence
of NAT support at L3
After removing these options from the [keystone_authtoken] section in
cinder.conf, snapshots are working:
service_token_roles_required=True
service_token_roles=service
On Friday, 01.12.2017, at 10:23 +0100, Kim-Norman Sahm wrote:
> this is my cinder section of the nova.conf
>
> [cinder]
>
this is my cinder section of the nova.conf
[cinder]
os_region_name=myregion
cross_az_attach=False
catalog_info=volumev3:cinderv3:internalURL
I can't find anything about cinder authentication in the nova config
options. https://docs.openstack.org/ocata/config-reference/compute/conf
On 11/30/2017 02:40 PM, Sean McGinnis wrote:
Development Focus
-----------------
The Queens-2 milestone deadline is December 7th. All projects with specific
deadlines should be wrapping them up in time to submit the release request
before the end of day on the 7th.
General Information
On 11/30/2017 08:11 PM, Emilien Macchi wrote:
A few months ago, we renamed ovb-updates to be
tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
The name is much longer but it describes better what it's doing.
We know it's a job with one controller, one compute and one storage
node,
AFAIR there was an attempt to push oslo.policy into Swift but it looks like
the patch was abandoned.
https://review.openstack.org/#/c/149930/
On Fri, Dec 1, 2017 at 8:04 AM, hie...@vn.fujitsu.com wrote:
> FYI, I have updated the topic for Heat's work [1]. And finally no
On 11/30/17 8:11 PM, Emilien Macchi wrote:
A few months ago, we renamed ovb-updates to be
tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
The name is much longer but it describes better what it's doing.
We know it's a job with one controller, one compute and one storage
node, deploying
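To spell out what the new name encodes (a toy sketch of my own, not project
tooling):

    # Decomposing the job name into the pieces described above.
    name = 'tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024'
    _, _, distro, version, env, nodes, featureset = name.split('-')
    print(distro, version, env)   # centos 7 ovb
    print(nodes.split('_'))       # ['1ctlr', '1comp', '1ceph']
    print(featureset)             # featureset024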
Hi Felix!
I'm a PhD student at CTU FEE. My own topic was simulation of automatic scaling
using both the event-driven approach and queueing network modelling. The second
part of the thesis was on the prediction of web server workload over time. If you
wanted, you could continue and simulate some