Hi team,
*TL;DR:* we are now focusing on developing use cases for Searchlight to
attract more users as well as contributors.
Here is the report for last week, Stein R-23 [1]. Let me know if you have
any questions.
[1]
https://www.dangtrinh.com/2018/11/searchlight-weekly-report-stein-r-23.html
On Tue, 06 Nov 2018 05:50:03 +0900 Dmitry Tantsur
wrote
>
>
> On Mon, Nov 5, 2018, 20:07 Julia Kreger *removes all of the hats*
> *removes years of dust from unrelated event planning hat, and puts it on for
> a moment*
>
> In my experience, events of any nature where
Sean,
I, too, am very interested in this particular discussion and working
towards getting OpenStack working out-of-the-box on FIPS systems. I've
submitted a few patches
(https://review.openstack.org/#/q/owner:%22Joshua+Cornutt%22) recently
and plan on going down my laundry list of patches I've
Hi all,
Time is running out for you to have your say in the T release name
poll. We have just under 3 days left. If you haven't voted, please do!
On Tue, Oct 30, 2018 at 04:40:25PM +1100, Tony Breeds wrote:
> Hi folks,
>
> It is time again to cast your vote for the naming of the T Release.
I'm interested in some feedback from the community, particularly those running
OpenStack deployments, as to whether FIPS compliance [0][1] is something folks
are looking for.
I've been seeing small changes starting to be proposed here and there for
things like MD5 usage related to its
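The small MD5-related changes mentioned above generally follow one pattern: on a FIPS-enabled system, plain `hashlib.md5()` can be rejected, but Python 3.9+ lets callers declare a non-security usage explicitly. A minimal sketch of that pattern (the function name here is illustrative, not actual OpenStack code):

```python
import hashlib

def etag_digest(data: bytes) -> str:
    """Compute an MD5 digest for a non-cryptographic purpose (e.g. ETags).

    usedforsecurity=False (Python 3.9+) tells a FIPS-constrained OpenSSL
    that this MD5 use is not a security function, so it is permitted.
    """
    return hashlib.md5(data, usedforsecurity=False).hexdigest()
```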
Dear OpenStack community,
I am setting up pci passthrough for GPUs using aliases.
I was wondering about the meaning of the fields device_type and
numa_policy, and how I should use them, as I could not find much detail
in the official documentation.
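For illustration, a hypothetical nova.conf alias for a GPU might look like this (the vendor/product IDs and values are examples only, not a recommendation; check the Nova PCI passthrough documentation for your release):

```ini
# device_type selects what kind of PCI device the alias matches:
#   type-PCI (a plain PCI device), type-PF (an SR-IOV physical function),
#   type-VF (an SR-IOV virtual function).
# numa_policy controls NUMA affinity for the assigned device:
#   "required", "legacy", or "preferred".
[pci]
alias = {"name": "gpu", "vendor_id": "10de", "product_id": "1db4", "device_type": "type-PCI", "numa_policy": "preferred"}
```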
Greetings,
Please be aware of the following patch [1]. This updates ansible in
queens, rocky, and stein.
This was just pointed out to me, and I didn't see it coming so I thought
I'd email the group.
That is all, thanks
[1] https://review.rdoproject.org/r/#/c/14960
--
Wes Hayutin
Associate
On 11/5/18 3:13 PM, Matt Riedemann wrote:
On 11/5/2018 1:36 PM, Doug Hellmann wrote:
I think the lazy stuff was all about the API responses. The log
translations worked a completely different way.
Yeah maybe. And if so, I came across this in one of the blueprints:
Matt Riedemann writes:
> On 11/5/2018 1:36 PM, Doug Hellmann wrote:
>> I think the lazy stuff was all about the API responses. The log
>> translations worked a completely different way.
>
> Yeah maybe. And if so, I came across this in one of the blueprints:
>
>
On 11/5/2018 1:36 PM, Doug Hellmann wrote:
I think the lazy stuff was all about the API responses. The log
translations worked a completely different way.
Yeah maybe. And if so, I came across this in one of the blueprints:
https://etherpad.openstack.org/p/disable-lazy-translation
Which says
This is the weekly summary of work being done by the Technical Committee
members. The full list of active items is managed in the wiki:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker
We also track TC objectives for the cycle using StoryBoard at:
On 11/5/2018 1:17 PM, Matt Riedemann wrote:
I'm thinking of a case like: resize an instance, but rather than
confirm/revert it, the user deletes the instance. That would clean up the
allocations from the target node but potentially not from the source node.
Well this case is at least not an
On Mon, Nov 5, 2018, 20:07 Julia Kreger *removes all of the hats*
>
> *removes years of dust from unrelated event planning hat, and puts it on
> for a moment*
>
> In my experience, events of any nature where convention venue space is
> involved, are essentially set in stone before being publicly
TC members,
I have updated the liaison assignments to fill in all of the
gaps. Please take a moment to review the list [1] so you know your
assignments.
Next week will be a good opportunity to touch base with your teams.
Doug
[1]
On 2018-11-05 11:06:14 -0800 (-0800), Julia Kreger wrote:
[...]
> I imagine that as a community, it is near impossible to schedule
> something avoiding holidays for everyone in the community.
[...]
Scheduling events that time of year is particularly challenging
anyway because of the proximity of
On 11/5/2018 1:06 PM, Julia Kreger wrote:
*removes all of the hats*
*removes years of dust from unrelated event planning hat, and puts it
on for a moment*
In my experience, events of any nature where convention venue space is
involved, are essentially set in stone before being publicly
Matt Riedemann writes:
> This is a follow up to a dev ML email [1] where I noticed that some
> implementations of the upgrade-checkers goal were failing because some
> projects still use the oslo_i18n.enable_lazy() hook for lazy log message
> translation (and maybe API responses?).
>
> The
On 11/5/2018 12:28 PM, Mohammed Naser wrote:
Have you dug into any of the operations around these instances to
determine what might have gone wrong? For example, was a live migration
performed recently on these instances and if so, did it fail? How about
evacuations (rebuild from a down host).
*removes all of the hats*
*removes years of dust from unrelated event planning hat, and puts it on
for a moment*
In my experience, events of any nature where convention venue space is
involved, are essentially set in stone before being publicly advertised as
contracts are put in place for hotel
On Fri, 2 Nov 2018 09:47:42 +0100, Andreas Scheuring wrote:
Dear Nova Community,
I want to announce the new focal point for Nova s390x libvirt/kvm.
Please welcome "Cathy Zhang" to the Nova team. She and her team will be
responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and
Hello everyone,
My apologies for cross-posting, but I wanted to make sure the various developer
groups saw this.
Rather than use the Infrastructure Onboarding session in Berlin [0] for
infrastructure sysadmin/developer onboarding, I thought we could use the time
for user onboarding. We've got
On Mon, Nov 5, 2018 at 4:17 PM Matt Riedemann wrote:
>
> On 11/4/2018 4:22 AM, Mohammed Naser wrote:
> > Just for information's sake, a clean state cloud which had no reported issues
> > over maybe a period of 2-3 months already has 4 allocations which are
> > incorrect and 12 allocations pointing
On Mon, Nov 5, 2018 at 4:06 AM Cédric Jeanneret wrote:
>
> On 11/2/18 2:39 PM, Dan Prince wrote:
> > I pushed a patch[1] to update our containerized deployment
> > architecture docs yesterday. There are 2 new fairly useful sections we
> > can leverage with TripleO's stepwise deployment. They
This is a follow up to a dev ML email [1] where I noticed that some
implementations of the upgrade-checkers goal were failing because some
projects still use the oslo_i18n.enable_lazy() hook for lazy log message
translation (and maybe API responses?).
The very old blueprints related to this
Update: I have not yet found co-authors; I'll keep drafting that position
paper [0],[1]. I've just done some baby steps so far. I'm open for feedback
and contributions!
PS: The deadline is Nov 9 03:00 UTC, but it *may* be extended if the
event chairs decide to do so. Fingers crossed.
[0]
On 11/2/2018 3:47 AM, Andreas Scheuring wrote:
Dear Nova Community,
I want to announce the new focal point for Nova s390x libvirt/kvm.
Please welcome "Cathy Zhang" to the Nova team. She and her team will be
responsible for maintaining the s390x libvirt/kvm Thirdparty CI [1] and any s390x
On 11/4/2018 10:17 PM, Chen CH Ji wrote:
Yes, this has been discussed for a long time, and if I remember correctly
the S PTG also had some discussion on it (maybe the Public Cloud WG?).
Claudiu has been pushing this for several cycles, and he actually
had some code at [1], but no additional
If you are seeing this error when implementing and running the upgrade
check command in your project:
Traceback (most recent call last):
  File "/home/osboxes/git/searchlight/.tox/venv/lib/python3.5/site-packages/oslo_upgradecheck/upgradecheck.py", line 184, in main
    return
Hi Mohammed,
What release of OpenStack are you using (Ocata, Pike, etc.)?
Also just to confirm my understanding: you do see the SSL connections come
up, but after some time they 'hang' - what do you mean by 'hang'? Do the
connections drop? Or do the connections remain up but you start seeing
Since there have only been approvals for Sergey and Mykola for Patrole core,
welcome to the team!
Felipe
> -----Original Message-----
> From: BARTRA, RICK
> Sent: Monday, October 29, 2018 2:56 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [qa][patrole] Nominating
Hi all,
Not sure how official the information about the next summit is, but it's on the
web site [1], so I guess it's worth asking.
Are we planning for the summit to overlap with the May holidays? The 1st of May
is a holiday in a big part of the world. We ask people to skip it in addition to
3+
There is not much news this week. There are several open changes which
add the base command framework to projects [1]. Those need reviews from
the related core teams. gmann and I have been trying to go through them
first to make sure they are ready for core review.
There is one neutron change
On 11/5/2018 5:52 AM, Chris Dent wrote:
* We need to have further discussion and investigation on
allocations getting out of sync. Volunteers?
This is something I've already spent a lot of time on with the
heal_allocations CLI, and have already started asking mnaser questions
about this
On 11/4/2018 4:22 AM, Mohammed Naser wrote:
Just for information's sake: a clean-state cloud which had no reported issues
over maybe a period of 2-3 months already has 4 allocations which are
incorrect and 12 allocations pointing to the wrong resource provider, so I
think this comes down to
On Mon, Nov 5, 2018 at 3:47 AM Bogdan Dobrelya wrote:
>
> Let's also think of removing puppet-tripleo from the base container.
> It really brings the world-in (and yum updates in CI!) each job and each
> container!
> So if we did so, we should then either install puppet-tripleo and co on
> the
> Thus we should only read from placement:
> * at compute node startup
> * when a write fails
> And we should only write to placement:
> * at compute node startup
> * when the virt driver tells us something has changed
I agree with this.
We could also prepare an interface for
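The policy quoted above (read only at startup or when a write fails) can be sketched as a tiny cache wrapper. This is a hypothetical illustration of the idea, not Nova's actual report-client API; all names here are invented:

```python
class ConflictError(Exception):
    """Stands in for a placement generation-conflict error (HTTP 409)."""

class PlacementCache:
    """Read provider state from placement only at startup or after a
    failed write; otherwise trust the locally cached view."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable that reads state from placement
        self._state = fetch()    # read once, at compute-node startup

    def write(self, apply_change):
        try:
            apply_change(self._state)
        except ConflictError:
            # Our cached view was stale (generation mismatch):
            # re-read from placement and retry once.
            self._state = self._fetch()
            apply_change(self._state)
```

The generation markers mentioned elsewhere in the thread are what make this safe: a stale write fails with a conflict rather than silently clobbering newer state, so the conflict itself becomes the refresh signal.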
Thank you for the reply, Flavia:
Hi Bogdan
sorry for the late reply - yesterday was a holiday here in Brazil!
I am afraid I will not be able to engage in this collaboration with
such a short time...we had to have started this initiative a little
earlier...
That's understandable.
I hoped though
On Sun, 4 Nov 2018, Monty Taylor wrote:
I've floated a half-baked version of this idea to a few people, but lemme try
again with some new words.
What if we added support for serving vendor data files from the root of a
primary URL as-per RFC 5785. Specifically, support deployers adding a
On Sun, 4 Nov 2018, Jay Pipes wrote:
Now that we have generation markers protecting both providers and consumers,
we can rely on those generations to signal to the scheduler report client
that it needs to pull fresh information about a provider or consumer. So,
there's really no need to
On 11/5/18 11:47 AM, Bogdan Dobrelya wrote:
> Let's also think of removing puppet-tripleo from the base container.
> It really brings the world-in (and yum updates in CI!) each job and each
> container!
> So if we did so, we should then either install puppet-tripleo and co on
> the host and
Let's also think of removing puppet-tripleo from the base container.
It really brings the world-in (and yum updates in CI!) each job and each
container!
So if we did so, we should then either install puppet-tripleo and co on
the host and bind-mount it for the docker-puppet deployment task steps
Hi team,
I have submitted a forum session [1] named "Cross-project Open API 3.0 support".
We can discuss this further at that session in Berlin.
Feel free to add your ideas here [2].
You are welcome to join us.
Thanks very much.
[1]
Dear Mohammed,
With SecuStack, we've been integrating end-to-end (E2E) transfer of
secrets into the OpenStack code. From your problem description, it
sounds like our implementation would address some of your points. For
the explanation below, I will refer to those secrets as "keys".
Our solution
> On Nov 5, 2018, at 10:19 AM, Thierry Carrez wrote:
>
> Monty Taylor wrote:
>> [...]
>> What if we added support for serving vendor data files from the root of a
>> primary URL as-per RFC 5785. Specifically, support deployers adding a json
>> file to
Monty Taylor wrote:
[...]
What if we added support for serving vendor data files from the root of
a primary URL as-per RFC 5785. Specifically, support deployers adding a
json file to .well-known/openstack/client that would contain what we
currently store in the openstacksdk repo and were just
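A hypothetical example of such a `.well-known/openstack/client` file, loosely modeled on the vendor profiles openstacksdk already ships (the field names and values below are illustrative, not a settled schema):

```json
{
  "name": "example-cloud",
  "profile": {
    "auth": {
      "auth_url": "https://identity.example.com/v3"
    },
    "regions": ["region-one"],
    "interface": "public"
  }
}
```

The appeal of RFC 5785-style well-known URIs here is that a client would only need the cloud's base URL to discover the rest, instead of shipping per-vendor data in the SDK repo.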
Thanks Eric for the patch.
This will help keeping placement calls under control.
Belmiro
On Sun, Nov 4, 2018 at 1:01 PM Jay Pipes wrote:
> On 11/02/2018 03:22 PM, Eric Fried wrote:
> > All-
> >
> > Based on a (long) discussion yesterday [1] I have put up a patch [2]
> > whereby you can set
On 11/2/18 2:39 PM, Dan Prince wrote:
> I pushed a patch[1] to update our containerized deployment
> architecture docs yesterday. There are 2 new fairly useful sections we
> can leverage with TripleO's stepwise deployment. They appear to be
> used somewhat sparingly so I wanted to get the word
Hi all, I'm zhaobo. I was the bug deputy for the last week, and I'm afraid I
cannot attend the coming upstream meeting, so I'm sending out this report:
Last week there were some high-priority bugs for Neutron. What a quiet week.
;-) Some other bugs also need attention; I list them here: