Hi,
On 28.09.2017 16:50, Jesse Pretorius wrote:
[...]
Do any packagers or deployment projects have issues with this
implementation? If there are any issues, what’re your suggestions to
resolve them?
This will still install the files into usr/etc:
$ python setup.py install --skip-build --roo
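For reference, a minimal sketch of the kind of pbr data_files entry that produces this (names and paths are hypothetical):

# setup.cfg (sketch)
[files]
data_files =
    etc/myservice = etc/myservice/*

Because the destination is a relative path, setuptools joins it to the install prefix (/usr by default), so the files land under usr/etc inside the install root rather than in /etc.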
On 09/29/2017 03:37 PM, Ian Wienand wrote:
I'm not aware of issues other than these at this time
Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons. Any debugging would be helpful,
thanks.
-i
Hi,
There are a few issues with devstack and the new zuulv3 environment:
LIBS_FROM_GIT is broken due to the new repos not having a remote
setup, meaning "pip freeze" doesn't give us useful output. [1] just
disables the test as a quick fix for this; [2] is a possible real fix
but should be tried a
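As a rough illustration of the check that breaks (this is a sketch, not the actual devstack code), LIBS_FROM_GIT verification boils down to looking for the library in "pip freeze" output:

import subprocess

# Sketch: an editable install from a repo with a remote shows up in
# "pip freeze" as an "-e git+<url>@<sha>#egg=<name>" line; with no
# remote configured, no such line appears and the check fails.
def installed_from_git(lib_name):
    frozen = subprocess.check_output(['pip', 'freeze']).decode()
    return any(line.startswith('-e') and lib_name in line
               for line in frozen.splitlines())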
My objective is to be able to download and upload from glance/computes to
swift in a faster way.
I was thinking that if glance could parallelize the connections to swift
for a single image (with chunks), it would be faster.
Am I wrong?
Is there any other way I am not thinking of?
Arnaud.
Le 28
Hi All,
Thanks for your votes.
As per the majority votes, https://review.openstack.org/#/c/507172/ was
created and merged successfully.
The FWaaS meeting will now be held on Thursdays at 1400 UTC in the
#openstack-fwaas channel, starting 5 October.
On 22-Sep-2017 5:50 AM, "Furukawa, Yushiro"
wrote:
> Hi,
Please ignore this email. Wrong mailing list :)
Sorry!
Praveen
On Thu, Sep 28, 2017 at 9:16 PM Praveen Yalagandula <
yprav...@avinetworks.com> wrote:
> Siva,
> Not all changes for #1 are in 17.1.8; there is a pull request that has been
> awaiting your review for the last few days.
> Note that the version EBSCO fin
Jeremy, Clark,
I tried several things; not sure I have enough git-fu to pull this
off. For example:
[dims@dims-mac 00:03] ~/openstack/openstack/mogan ⟩ git push gerrit HEAD:refs/for/master
Counting objects: 8104, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2350/2350)
Siva,
Not all changes for #1 are in 17.1.8; there is a pull request that has been
awaiting your review for the last few days.
Note that the version EBSCO finally tested with was making one neutron API
call per pool PATCH API call. The original one was making three neutron API
calls.
Changes in 17.1.8 only reduce it
Hello API WG,
I've got a patch up for a proposal to fix OSSN-0075 by introducing a
new policy. There are concerns that this will introduce an
interoperability problem in that an API call that works in one
OpenStack cloud may not work in other OpenStack clouds. As author of
the spec, I think this
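A minimal sketch of what registering such a policy could look like with oslo.policy (the rule name, check string, and operation are hypothetical, not what the spec proposes):

from oslo_policy import policy

# Sketch: a new documented rule lets each deployment decide who may
# invoke the affected call; deployments choosing different check
# strings is exactly the interoperability concern raised above.
rules = [
    policy.DocumentedRuleDefault(
        name='example:ossn_0075_operation',
        check_str='rule:admin_api',
        description='Hypothetical policy gating the affected API call.',
        operations=[{'path': '/v3/example', 'method': 'POST'}],
    ),
]

def list_rules():
    return rules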
On Wed, Sep 27, 2017, at 03:24 PM, Monty Taylor wrote:
> Hey everybody,
>
> We're there. It's ready.
>
> We've worked through all of the migration script issues and are happy
> with the results. The cutover trigger is primed and ready to go.
>
> But as it's 21:51 UTC / 16:52 US Central it's a s
I have proposed http://forumtopics.openstack.org/cfp/details/33 and will be
present. Thanks Thierry!
On Thu, Sep 28, 2017 at 9:48 PM, Thierry Carrez
wrote:
> Erik McCormick wrote:
> > [...]
> > Also, if you'd like to discuss this in detail with a room full of
> > bodies, I suggest proposing a se
In this series of patches we are generalizing the PCI framework to
handle MDEV devices. We acknowledge it's a lot of patches, but most of them
are small, and the logic behind them is basically to make the framework
understand two new fields, MDEV_PF and MDEV_VF.
That's not really "generalizing the PCI framework to hand
Let's do a Queens spec review sprint.
What day works for people that review specs?
Monday came up in the team meeting today, but Tuesday could be good too
since Mondays are generally evil.
--
Thanks,
Matt
On Fri, Sep 29, 2017 at 7:45 AM, Matt Riedemann wrote:
> On 9/21/2017 4:01 PM, Matt Riedemann wrote:
>
>> So this shouldn't be news now that I've read back through a few emails in
>> the mailing list (I've been distracted with the Pike release, PTG planning,
>> etc) [1][2][3] but we have until Se
On 9/21/2017 4:01 PM, Matt Riedemann wrote:
So this shouldn't be news now that I've read back through a few emails
in the mailing list (I've been distracted with the Pike release, PTG
planning, etc) [1][2][3] but we have until Sept 29 to come up with
whatever forum sessions we want to propose.
On Thu, 28 Sep 2017, Premysl Kouril wrote:
Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.
Even if that were true and overhead memory co
>
> Only the memory mapped for the guest is strictly allocated from the
> NUMA node selected. The QEMU overhead should float on the host NUMA
> nodes. So it seems that the "reserved_host_memory_mb" is enough.
>
Even if that were true and overhead memory could float across NUMA
nodes, it generally d
On 09/28/2017 11:37 AM, Sahid Orentino Ferdjaoui wrote:
Please consider the support of MDEV for the /pci framework which
provides support for vGPUs [0].
According to the discussion [1]
With this first implementation which could be used as a skeleton for
implementing PCI Devices in Resource Tr
Excerpts from Jesse Pretorius's message of 2017-09-28 17:17:55 +0000:
> On 9/28/17, 5:11 PM, "Doug Hellmann" wrote:
>
> > In the past we had trouble checking those files into git and gating
> > against the results being "up to date" or not changing in any way
> > because configuration options tha
Hey All,
I've got a spec up for a change I want to implement in Glance for Queens to
enhance the current checksum (md5) functionality with a stronger hash
algorithm. I'm going to do this in such a way that it is easily altered in
the future for new algorithms as they are released. I'd appreciate
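A minimal sketch of the pluggable-hash idea (the function name and defaults are illustrative, not the spec's interface):

import hashlib

# Sketch: checksum an image with a configurable algorithm so the
# default can move from md5 to a stronger hash now, and to newer
# algorithms later, without touching the code that computes it.
def compute_checksum(image_file, algorithm='sha512', chunk_size=65536):
    hasher = hashlib.new(algorithm)
    for chunk in iter(lambda: image_file.read(chunk_size), b''):
        hasher.update(chunk)
    return hasher.hexdigest()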
I have updated the old oslo.config drivers spec [1] to remove a bunch of
the information about etcd and focus on the secret store use case we
discussed at the PTG in Queens. I think this work is a prerequisite for
the plaintext secrets spec [2] work, because castellan already depends
on oslo.config
All,
I am writing to make everyone aware that we have had to move the Cisco
Fibre Channel Zone Manager driver to the unsupported and deprecated status.
CI has not run successfully for the better part of the last year and as
per Cinder's compliance policies, the driver needs to be deprecated a
- Original Message -
> From: "Jay Pipes"
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, September 28, 2017 12:53:16 PM
> Subject: Re: [openstack-dev] [nova] how does UEFI booting of VM manage
> per-instance copies of OVMF_VARS.fd ?
>
> On 09/27/2017 09:09 AM, Waines, Greg wrot
On Fri, Sep 29, 2017 at 12:04 AM, Janki Chhatbar
wrote:
> Hi
>
> I understand that during an update, paunch restarts containers whenever the
> hash of an image changes. TRIPLEO_CONFIG_HASH [1] is generated based on
> the config value specified [2], which defaults to
> /var/lib/config-data/. Many serv
Welcome to our regular release countdown email.
Development Focus
-----------------
Team focus should be on spec approval and implementation for priority features.
General Information
-------------------
Just one more mention - teams should review their release liaison information
and make sur
Hi
I understand that during an update, paunch restarts containers whenever the hash
of an image changes. TRIPLEO_CONFIG_HASH [1] is generated based on the
config value specified [2], which defaults to /var/lib/config-data/.
Many services specify path at /var/lib/config-data/puppet-generated/
([3] for
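A minimal sketch of the general technique being described, deriving a hash from a service's config directory so its container restarts only when the contents change (illustrative, not the actual paunch/TripleO code):

import hashlib
import os

# Sketch: walk the config directory in a stable order and hash file
# contents; config living outside the hashed path never triggers a
# restart, which is the gap described above.
def config_hash(path):
    digest = hashlib.sha256()
    for root, _dirs, files in sorted(os.walk(path)):
        for name in sorted(files):
            with open(os.path.join(root, name), 'rb') as f:
                digest.update(f.read())
    return digest.hexdigest()

print(config_hash('/var/lib/config-data/puppet-generated/myservice'))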
Hey all,
In the weekly meeting on Tuesday, we talked about possible forum
sessions for Sydney. I proposed the following based on the etherpad [0].
* Keystone User & Operator Feedback [1]
* Application Credentials Feedback [2]
* RBAC/Policy Roadmap Feedback [3]
We decided to omit the last p
On 09/28/2017 12:22 PM, Apoorva Deshpande wrote:
> It appears that Cinder started using NFS locks around Sept 19th. That
> resulted in our CI failures as we don't support it. Tempest tests succeeded
> when we added a nolock option in the NFS configuration[1].
>
> Can someone provide more informati
> On Sep 18, 2017, at 12:54 PM, Hongbin Lu wrote:
>
> Hi Chris,
>
> Sorry I missed the meeting since I was not in PTG last week. After a quick
> research on the mission of SIG-K8s, I think we (the OpenStack Zun team) have
> an item that fits well into this SIG, which is the k8s connector fea
Hi Yipei,
Even running through neutron-lbaas I get the same successful test.
Just to double check, you are using the Octavia driver?
stack@devstackpy27-2:~$ sudo ip netns exec qdhcp-4bcefe3e-038f-4a77-af4f-a560b6316a7a curl 172.21.1.16
Welcome to 172.21.1.17 connection 3
Michael
On Thu, Sep 28,
I just noticed this today, but the
gate-grenade-dsvm-neutron-multinode-live-migration-nv job in the nova
check queue has been 100% fail since around August 18th.
I've reported a bug with the details:
https://bugs.launchpad.net/nova/+bug/1720191
It has something to do with test_live_block_migr
On 9/28/17, 5:11 PM, "Doug Hellmann" wrote:
> In the past we had trouble checking those files into git and gating
> against the results being "up to date" or not changing in any way
> because configuration options that end up in the file are defined in
> libraries used by the services. So as long
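For context, those sample files come from oslo-config-generator driven by a small input file along these lines (namespaces are illustrative):

# oslo-config-generator input file (sketch)
[DEFAULT]
output_file = etc/myservice/myservice.conf.sample
wrap_width = 79
namespace = myservice
namespace = oslo.log
namespace = oslo.messaging

Since the oslo.* namespaces pull option definitions from whatever library versions are installed, a checked-in sample drifts as soon as any of those libraries changes, which is the gating problem described above.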
On 28/09/17 17:06, Kim-Norman Sahm wrote:
> Hi,
>
> I'm currently testing designate and I have a question about the
> architecture.
> We're using openstack newton with keystone v3 and thus the keystone
> domain/project structure.
>
> I've tried the global nova_fixed and neutron_floating_ip hand
On 09/27/2017 09:09 AM, Waines, Greg wrote:
Hey there ... a question about UEFI booting of VMs.
i.e.
glance image-create --file cloud-2730.qcow --disk-format qcow2
--container-format bare --property "hw-firmware-type=uefi" --name
clear-linux-image
in order to specify that you want to use U
At the Queens PTG in Denver the documentation team members present
discussed a new retention policy for content published to
docs.openstack.org. I have a spec up for review to document that
policy and the steps needed to implement it. This policy will affect
all projects, now that most of the docum
On Thu, Sep 28, 2017 at 12:32 PM, Emilien Macchi wrote:
> On Thu, Sep 28, 2017 at 9:22 AM, Wesley Hayutin
> wrote:
> [...]
> > OK.. I think the solution is to start migrating these jobs to RDO
> Software
> > Factory third party testing.
> >
> > Here is what I propose:
> > 1. Start with an experi
Greetings OpenStack community,
It was a quiet meeting this week, probably due to elmiko being absent. And
probably also due to cdent and edleafe being consumed by work outside of the
SIG. We did note that we are looking forward to our expanded role with the
addition of the SDK developers into t
On Thu, Sep 28, 2017 at 9:22 AM, Wesley Hayutin wrote:
[...]
> OK.. I think the solution is to start migrating these jobs to RDO Software
> Factory third party testing.
>
> Here is what I propose:
> 1. Start with an experiment check job
> https://review.rdoproject.org/r/#/c/9823/
> This will help
On Thu, Sep 28, 2017 at 4:27 PM, Arnaud MORIN wrote:
> Hey all,
> So I finally tested your pull requests, it does not work.
> 1 - For uploads, swiftclient is not using threads when source is given by
> glance:
> https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L18
It appears that Cinder started using NFS locks around Sept 19th. That
resulted in our CI failures as we don't support it. Tempest tests succeeded
when we added a nolock option in the NFS configuration[1].
Can someone provide more information on this change?
Thanks,
Apoorva
[1] http://openstack-c
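For reference, the workaround amounts to passing mount options through the NFS backend configuration; a sketch (the section name and share file are illustrative):

# cinder.conf (sketch)
[nfs-backend]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares
nfs_mount_options = nolock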
On Thu, Sep 28, 2017 at 3:23 AM, Steven Hardy wrote:
> On Thu, Sep 28, 2017 at 8:04 AM, Marios Andreou
> wrote:
> >
> >
> > On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel
> wrote:
> >>
> >> Hi,
> >>
> >>
> >> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
> >> > I was reviewing https://review.o
At the previous RefStack meeting, the team unanimously decided to move
our weekly meeting from Tuesdays at 19:00 UTC to Tuesdays at 17:00 UTC in
#openstack-meeting-alt. [1][2]
Thanks
Chris
[1]
http://eavesdrop.openstack.org/meetings/refstack/2017/refstack.2017-09-26-19.00.log.html#l-58
[2] https
Any info on this?
I did launch a VM with UEFI booting and did not see any copy of OVMF_VARS.fd
proactively copied into /etc/nova/instances// .
Maybe Nova only does that on a change to OVMF_VARS.fd?
(I haven't figured out how to do that.)
anyways any info or pointers would be appreciated,
than
Excerpts from Jesse Pretorius's message of 2017-09-28 14:50:24 +0000:
> There’s some history around this discussion [1], but times have changed and
> the purpose of the patches I’m submitting is slightly different [2] as far as
> I can see – it’s a little more focused and less intrusive.
>
> The
Hi,
I'm currently testing designate and I have a question about the architecture.
We're using openstack newton with keystone v3 and thus the keystone
domain/project structure.
I've tried the global nova_fixed and neutron_floating_ip handlers but all dns
records (for each domains/projects) are
It looks like the conntrack deletion can be skipped for port deletion, no?
On bulk deletes of lots of VMs, the entries that were deleted never existed in
the conntrack table.
From a quick look, the patch below seems to go along those lines:
https://review.openstack.org/#/c/243994/
Is there a plan to distinguis
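A minimal sketch of the per-port cleanup in question (illustrative, not the neutron agent code):

import subprocess

# Sketch: drop conntrack entries addressed to a deleted port's fixed
# IP. "conntrack -D" can exit non-zero when nothing matches, which is
# the common case on bulk VM deletion described above, so a miss is
# not treated as an error.
def delete_conntrack_entries(fixed_ip):
    subprocess.call(['conntrack', '-D', '-d', fixed_ip])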
Erik,
Thanks for setting up a session for it.
Glad it is driven by Operators.
I will be happy to work with you on the session and run it with you.
Thanks,
Arkady
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Thursday, September 28, 2017 7:40 AM
To: Lee Yarwood
Cc: OpenStack Devel
Please consider the support of MDEV for the /pci framework which
provides support for vGPUs [0].
According to the discussion [1]
With this first implementation, which could be used as a skeleton for
implementing PCI devices in the Resource Tracker, we provide support for
attaching vGPUs to guests. An
Hey all,
So I finally tested your pull requests, it does not work.
1 - For uploads, swiftclient is not using threads when source is given by
glance:
https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L1847
2 - For downloads, when requesting the file from swift, it i
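A minimal sketch of the threaded, segmented upload path being discussed (container name, file name, and option values are illustrative):

from swiftclient.service import SwiftService, SwiftUploadObject

# Sketch: SwiftService splits the source into segments and uploads
# them on a thread pool; the point above is that this path is not
# taken when the source comes from glance as a file-like object.
options = {
    'segment_size': 200 * 1024 * 1024,  # upload in 200 MiB segments
    'object_uu_threads': 10,
    'segment_threads': 10,
}
with SwiftService(options=options) as swift:
    for result in swift.upload('images',
                               [SwiftUploadObject('cloud.qcow2')]):
        if not result['success']:
            print(result['error'])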
There’s some history around this discussion [1], but times have changed and the
purpose of the patches I’m submitting is slightly different [2] as far as I can
see – it’s a little more focused and less intrusive.
The projects which deploy OpenStack from source or using python wheels
currently h
On 09/28/2017 05:29 AM, Sahid Orentino Ferdjaoui wrote:
Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.
What I see in the code/docs doesn't m
Hi, Michael,
Thanks a lot. Look forward to your further test. I am trying to deploy a new
environment, too, and hope it works well this time.
Best regards,
Yipei
On Wed, Sep 27, 2017 at 10:27 AM, Yipei Niu wrote:
> Hi, Michael,
>
> The instructions are listed as follows.
>
> First, create a net1.
>
On Thu, Sep 28, 2017 at 12:23 AM, Steven Hardy wrote:
[...]
>
> I completely agree we need this coverage, and honestly we should have
> had it a long time ago, but we need to make progress on this last
> critical blocker for pike, while continuing to make progress on the CI
> coverage (which shoul
Thanks Thierry, not sure if I can make the trip, but I will give it a try :)
On Thu, Sep 28, 2017 at 9:48 PM, Thierry Carrez
wrote:
> Erik McCormick wrote:
> > [...]
> > Also, if you'd like to discuss this in detail with a room full of
> > bodies, I suggest proposing a session for the Forum in Sydney
Erik McCormick wrote:
> [...]
> Also, if you'd like to discuss this in detail with a room full of
> bodies, I suggest proposing a session for the Forum in Sydney. If some
> of the contributors will be there, it would be a good opportunity for
> you to get feedback.
Yes, "Bare metal as a service: I
On 2017-09-28 11:13:56 +1000 (+1000), Tony Breeds wrote:
[...]
> I can see a policy that looked more like:
>
> Phase  Time frame     Summary             Changes Supported
> I      0-12 months    Maintained release  All bugfixes (that meet the
>        after release                      criteria described
Jeremy, Clark,
Filed a change :)
https://review.openstack.org/508151
Thanks,
Dims
On Thu, Sep 28, 2017 at 8:55 AM, Jeremy Stanley wrote:
> On 2017-09-27 20:02:25 -0400 (-0400), Davanum Srinivas wrote:
>> I'd like to avoid the ACL update which will make it different from
>> other projects. Since
On 2017-09-27 20:02:25 -0400 (-0400), Davanum Srinivas wrote:
> I'd like to avoid the ACL update which will make it different from
> other projects. Since we don't expect to do this again, can you please
> help do this?
[...]
He (probably accidentally) left out the word "temporary." The ACL
only n
On Sep 28, 2017 4:31 AM, "Lee Yarwood" wrote:
On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
Thanks Arkady!
FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:
http://forumtopics.openstack
BTW, we plan to release 5.33 with the patch
https://review.openstack.org/#/c/500456/
Please let me know if you need to hold the release.
[ Unreleased changes in openstack/oslo.messaging (master) ]
Changes between 5.32.0 and a9d10d3
* 3a9c01f 2017-09-24 20:25:38 -0700 Fix default value of RPC d
I see the exception now Lajos:
class L2GatewayInUse(exceptions.InUse):
message = _("L2 Gateway '%(gateway_id)s' still has active mappings "
"with one or more neutron networks.")
:-)
On Wed, Sep 27, 2017 at 6:40 PM, Ricardo Noriega De Soto <
rnori...@redhat.com> wrote:
> Hey
Ken, thanks for raising this. The Oslo team will send notice early when we
have major changes like this.
2017-09-27 4:17 GMT+08:00 Ken Giusti :
> Hi Folks,
>
> Just a head's up:
>
> In Queens the default access policy for RPC Endpoints will change from
> LegacyRPCAccessPolicy to DefaultRPCAccessPo
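A minimal sketch of pinning the access policy explicitly so behaviour does not change when the default flips (transport configuration and the endpoint are illustrative):

from oslo_config import cfg
import oslo_messaging
from oslo_messaging.rpc import dispatcher

class TestEndpoint(object):
    def echo(self, ctxt, arg):
        return arg

# Sketch: passing access_policy keeps dispatch behaviour stable across
# the Queens default change from LegacyRPCAccessPolicy (every endpoint
# method callable) to DefaultRPCAccessPolicy (methods with a leading
# underscore excluded).
transport = oslo_messaging.get_transport(cfg.CONF)
target = oslo_messaging.Target(topic='test', server='server1')
server = oslo_messaging.get_rpc_server(
    transport, target, [TestEndpoint()], executor='threading',
    access_policy=dispatcher.DefaultRPCAccessPolicy)
server.start()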
On Wed, Sep 27, 2017 at 11:10:40PM +0200, Premysl Kouril wrote:
> > Lastly, qemu has overhead that varies depending on what you're doing in the
> > guest. In particular, there are various IO queues that can consume
> > significant amounts of memory. The company that I work for put in a good
> > b
On 21-09-17 15:10:52, Thierry Carrez wrote:
> Sean Dague wrote:
> > Agreed. We're already at 5 upgrade tags now?
> >
> > I think honestly we're going to need a picture to explain the
> > differences between them. Based on the confusion that kept seeming to
> > come during discussions at the PTG, I
Hi fellow Kuryrs!
It's that time of the cycle again where we hold our virtual project team
gathering[0]. The dates this time are:
October 2nd, 3rd and 4th
The proposed sessions are:
October 2nd 13:00utc: Scale discussion
In this session we'll talk about the recent scale testing we have perf
On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
Thanks Arkady!
FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:
http://forumtopics.openstack.org/cfp/details/19
You might want to reach out
Hello Everyone,
This is a friendly reminder that the submission period ends tomorrow (Sep 29th).
Take some time to think about the topics you would like to talk about and submit
them at:
http://forumtopics.openstack.org/cfp/create
Submit your topic before 11:59PM UTC on Friday September 29th!
On Thu, Sep 28, 2017 at 8:04 AM, Marios Andreou wrote:
>
>
> On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel wrote:
>>
>> Hi,
>>
>>
>> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
>> > I was reviewing https://review.openstack.org/#/c/487496/ and
>> > https://review.openstack.org/#/c/487488/ when
Thanks Sean for raising the concerns. We don't really fork nova, only some
parts of its "ABI". For the two API surfaces, we have different
strategies, please see explanations below:
On Wed, Sep 27, 2017 at 10:34 PM, Sean Dague wrote:
> On 09/27/2017 09:31 AM, Julia Kreger wrote:
> > [...]
> >>
On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel wrote:
> Hi,
>
>
> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
> > I was reviewing https://review.openstack.org/#/c/487496/ and
> > https://review.openstack.org/#/c/487488/ when I realized that we still
> > didn't have any test coverage for minor