Hello everybody,
I would like to ask about the limitations of Erasure Coding in Swift right
now. What can we do to overcome these limitations?
Thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubsc
Yes, this just started occurring with the Thursday/Friday updates to the
Ubuntu cloud image upstream of us.
I have posted a patch for Queens here: https://review.openstack.org/#/c/569531
We will be back porting that as soon as we can to the other stable
releases. Please review the backports as they
Hi - let's try this again - this time with pike :-)
Any suggestions on how to get the image builder to create a larger loop
device? I think that's what the problem is.
Thanks in advance.
2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
diskimage_builder.block_device.level1.mbr [-] W
Hi rezroo,
Yes, the recent release of pip 10 broke the disk image building.
There is a patch posted here: https://review.openstack.org/#/c/562850/
pending review that works around this issue for the ocata branch by
pinning the pip used for the image building to a version that does not
have this iss
Hello - I'm trying to install a working local.conf devstack ocata on a
new server, and some python packages have changed, so I end up with this
error during the build of the octavia image:
2018-05-18 01:00:26.276 | Found existing installation: Jinja2 2.8
2018-05-18 01:00:26.280 | Uninsta
2018-04-16 7:46 GMT+00:00 Ian Wienand :
> On 04/15/2018 09:32 PM, Gary Kotton wrote:
>>
>> The gate is currently broken with
>> https://launchpad.net/bugs/1763966.
>> https://review.openstack.org/#/c/561427/
>> Can unblock us in the short term. Any other ideas?
>
>
> I'm thinking this is probably
On 04/15/2018 09:32 PM, Gary Kotton wrote:
The gate is currently broken with
https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/
Can unblock us in the short term. Any other ideas?
I'm thinking this is probably along the lines of the best idea. I
left a fairly long co
Right. Thx Gary :)
> Message written by Gary Kotton on 16.04.2018
> at 09:14:
>
> Hi,
> I think that we need https://review.openstack.org/561471 until we have a
> proper solution.
> Thanks
> Gary
>
> On 4/16/18, 10:13 AM, "Slawomir Kaplonski" wrote:
>
>Hi,
>
>I jus
Hi,
I think that we need https://review.openstack.org/561471 until we have a proper
solution.
Thanks
Gary
On 4/16/18, 10:13 AM, "Slawomir Kaplonski" wrote:
Hi,
I just wanted to ask if there is any ongoing work on
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade fai
Hi,
I just wanted to ask if there is any ongoing work on
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It
looks like e.g. all grenade jobs in neutron are currently broken :/
> Message written by Gary Kotton on 15.04.2018
> at 13:32:
>
> Hi,
> The g
Hi,
The gate is currently broken with https://launchpad.net/bugs/1763966.
https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any
other ideas?
Thanks
Gary
On Thu, Mar 29, 2018 at 5:21 AM, James E. Blair wrote:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for
>
> Neither local nor third-party CI use should be affected. There's no
> change in behavior based on current usage patterns. Only the caveat
> that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
> non-existent package name), it will not automatically be caught.
>
> -Jim
P
Sean McGinnis writes:
> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> >
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior. This shouldn't be a significant cha
On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> > Hi,
> >
> > I've proposed a change to devstack which slightly alters the
> > LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> > those using legacy
straints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
>
> If I do
>
> > git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
>
> And then stack.sh
>
> We will see it is using openstacksdk-0.12.0
Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significan
Hi,
I've proposed a change to devstack which slightly alters the
LIBS_FROM_GIT behavior. This shouldn't be a significant change for
those using legacy devstack jobs (but you may want to be aware of it).
It is more significant for new-style devstack jobs.
The change is at https://review.openstack
stack/requirements/blob/stable/queens/upper-constraints.txt#L297
If I do
git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
And then stack.sh
We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0
Having said that, we need the older version; how to conf
stack/requirements/blob/stable/queens/upper-constraints.txt#L297
If I do
git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
And then stack.sh
We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0
Having said that, we need the older version; how to conf
upper-constraints.txt#L297
If I do
> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
And then stack.sh
We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0
Having said that, we need the older version; how to configure devstack to use
openstacksdk===0.
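For readers hitting the same question: one way to pin a library for a devstack run is to edit the constraints file the install consumes before re-running stack.sh. This is only a sketch, under the assumption that devstack reads /opt/stack/requirements/upper-constraints.txt; the helper name and the placeholder version are hypothetical, and the exact release to pin is whatever your deployment actually needs.

```shell
# Sketch (assumption): rewrite a library's "===" pin in the constraints
# file devstack consumes, then re-run stack.sh.
pin_constraint() {
  # pin_constraint FILE NAME VERSION -- rewrite NAME's pin in FILE
  local file=$1 name=$2 version=$3
  sed -i "s/^${name}===.*/${name}===${version}/" "$file"
}

# Example invocation against a live checkout (replace <older-version>):
# pin_constraint /opt/stack/requirements/upper-constraints.txt \
#     openstacksdk <older-version>
```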
On Mon, 5 Mar 2018, 1:02 am Ian Wienand, wrote:
> Hello,
>
> Jens Harbott (frickler) has agreed to take on core responsibilities in
> devstack, so feel free to bug him about reviews :)
>
Yay +1
>
> We have also added the members of qa-release in directly to
> devstack-core, just for visibility
Hello,
Jens Harbott (frickler) has agreed to take on core responsibilities in
devstack, so feel free to bug him about reviews :)
We have also added the members of qa-release in directly to
devstack-core, just for visibility (they already had permissions via
qa-release -> devstack-release -> devst
On 2018-01-24 14:14, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
>
> This is affecting me on Fedora26+ from
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
>
> This is affecting
Hi everyone,
Since today, when I try to install devstack-plugin-container plugin over
fedora. It complains in here [1] about not being able to sync the cache
for the repo with the following error [2].
This is affecting me on Fedora26+ from different network locations, so I
was wondering if someon
cor...@inaugust.com (James E. Blair) writes:
> "gong_ys2004" writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
>> https://review.openstack.org/#/
"gong_ys2004" writes:
> Hi, everyone
> I am trying to migrate tacker's functional CI job into new zuul v3 framework,
> but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
> https://review.openstack.org/#/c/516004/4/.zuul.yaml:I have:
> devstack_plugin
The workaround [1] has not landed yet. I saw it has a +1 workflow but has not
been merged.
Thanks,
Tong
[1] https://review.openstack.org/#/c/508344/
On Mon, Oct 2, 2017 at 6:51 AM, Mehdi Abaakouk wrote:
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue
> on tel
, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
>
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
>
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue
Looks like the LIBS_FROM_GIT workarounds have landed, but I still have some
issue
on telemetry integration jobs:
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wr
I have overlay2 and super fast disk I/O (memory cheat + SSD),
just the CPU freq is not high. The CPU is a Broadwell,
and it actually has a lot more cores (E5-2630V4). Even a 5-year-old gamer CPU
can be 2 times faster on a single core, but cannot compete with all of the cores ;-)
This machine has seen
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons. Any debugging would be helpful,
> thanks.
On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setu
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
>>
On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
On 09/29/2017 03:37 PM, Ian Wienand wrote:
I'm not aware of issues other than these at this time
Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons. Any debugging would be helpful,
On 09/29/2017 03:37 PM, Ian Wienand wrote:
I'm not aware of issues other than these at this time
Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons. Any debugging would be helpful,
thanks.
-i
Hi,
There's a few issues with devstack and the new zuulv3 environment
LIBS_FROM_GIT is broken due to the new repos not having a remote
setup, meaning "pip freeze" doesn't give us useful output. [1] just
disables the test as a quick fix for this; [2] is a possible real fix
but should be tried a
On 26 September 2017 at 07:34, Attila Fazekas wrote:
> decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
>
> Fully pulling all containers takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm we usually have 4 cores,
> so it is possible
decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
Fully pulling all containers takes something like ~4.5 min (from localhost,
one leaf request at a time),
but on the gate vm we usually have 4 cores,
so it is possible to go below 2 min with a better pulling strategy,
unless we hit so
On Fri, Jun 16, 2017 at 12:06:47PM +1000, Tony Breeds wrote:
> Hi All,
> I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on intel and ppc64le. I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on 3.1.
On 22 September 2017 at 17:21, Paul Belanger wrote:
> On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> > "if DevStack gets custom images prepped to make its jobs
>> > run faster, won't Triple-O, Kolla, et cetera want
On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> > "if DevStack gets custom images prepped to make its jobs
> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> > do we draw that line?). "
> >
> >
On 22 September 2017 at 11:45, Clark Boylan wrote:
> On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
>> Another, more revolutionary (for good or ill) alternative would be to
>> move gates to run Kolla instead of DevStack. We're working towards
>> registry of images, and we support mos
On Fri, Sep 22, 2017, at 01:18 PM, Attila Fazekas wrote:
> The main offenders reported by devstack do not seem to explain the
> growth visible on OpenstackHealth [1].
> The logs also started to disappear, which does not make it easy to figure
> out.
>
> Which code/infra changes can be related?
The main offenders reported by devstack do not seem to explain the
growth visible on OpenstackHealth [1].
The logs also started to disappear, which does not make it easy to figure out.
Which code/infra changes can be related?
http://status.openstack.org/openstack-health/#/test/devstack?resolut
On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
> Another, more revolutionary (for good or ill) alternative would be to
> move gates to run Kolla instead of DevStack. We're working towards
> registry of images, and we support most of openstack services now. If
> we enable mixed install
On 22 September 2017 at 07:31, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> "if DevStack gets custom images prepped to make its jobs
>> run faster, won't Triple-O, Kolla, et cetera want the same and where
>> do we draw that line?). "
>>
>> IMHO we can try
On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> "if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?). "
>
> IMHO we can try to have only one big image per distribution,
> where the pac
"if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?). "
IMHO we can try to have only one big image per distribution,
where the packages are the union of the packages requested by all teams,
minus the pac
On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
[...]
> The image building was the good old working solution and unless
> the image build becomes a super expensive thing, this is still the
> best option.
[...]
It became a super expensive thing, and that's the main reason we
stopped doin
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand wrote:
> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and we can put -dsvm- in
On 09/20/2017 09:30 AM, David Moreau Simard wrote:
At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose ?
... and we can put -dsvm- in the jobs names to indicate it should run
on these nodes :)
Older hand
On Tue, Sep 19, 2017 at 9:03 AM, Jeremy Stanley wrote:
>
> In order to reduce image sizes and the time it takes to build
> images, once we had local package caches in each provider we stopped
> pre-retrieving packages onto the images. Is the time spent at this
> stage mostly while downloading pack
On 09/19/2017 11:03 PM, Jeremy Stanley wrote:
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
The jobs do a 120..220 sec apt-get install, and packages defined in
/files/debs/general are missing from the images before starting the job.
Is the time spent at this stage mostly while
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
> Let's start with the first obvious difference compared to the old-time
> jobs:
> The jobs do a 120..220 sec apt-get install, and packages defined in
> /files/debs/general are missing from the images before starting the job.
>
> We us
The gate-tempest-dsvm-neutron-full-ubuntu-xenial job is 20..30 min slower
than it is supposed to be / used to be.
The extra time has multiple reasons, and it is not because we test more :( .
Usually we are just less smart than before.
A huge time increment is visible in devstack as well.
devstack is adve
Hi David,
Thanks for looking into this. I do watch devstack changes every once in a
while but couldn't catch this one in time. The missing pmap -XX flag
problem has been there forever but it used to be non fatal. Now it is,
which is in principle a good change.
I will make sure that it passes aga
Hi,
I was trying to make sure the existing openSUSE jobs passed on Zuul v3
but even the regular v2 jobs are hitting a bug I filed here [1].
As far as I know, these jobs were passing until recently.
This is preventing us from sanity checking that everything works out
of the box for the suse devsta
On 08/02/2017 07:17 AM, Sean Dague wrote:
The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute
An issue with the xenserver CI was identified. Once we get this patch
in, and backported to ocata, it should also address a frequent grenade
multinode fail scenario which is plaguing the gate.
-Sean
On 08/02/2017 07:17 AM, Sean Dague wrote:
The 3 node scenarios in Neutron (which are s
The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet
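To sketch the two ways of mapping new computes described above (the periodic option name is an assumption based on the Pike-era nova docs, not something quoted in the thread):

```ini
# nova.conf sketch -- instead of running "nova-manage cell_v2 discover_hosts"
# by hand each time a compute checks in, let the scheduler map newly
# checked-in hosts into cells periodically (every 300 seconds here).
[scheduler]
discover_hosts_in_cells_interval = 300
```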
On Mon, Jun 19, 2017 at 08:17:53AM -0400, Davanum Srinivas wrote:
> Tony,
>
>
> On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds wrote:
> > On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> >
> >> Awesome! thanks Tony, some kolla jobs do that for example, but I think
> >> this job
Tony,
On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds wrote:
> On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
>
>> Awesome! thanks Tony, some kolla jobs do that for example, but I think
>> this job is a better one to key off of:
>> http://git.openstack.org/cgit/openstack-infra/p
On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> Awesome! thanks Tony, some kolla jobs do that for example, but I think
> this job is a better one to key off of:
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
>
> Outline of the
On Sun, Jun 18, 2017 at 7:36 PM, Tony Breeds wrote:
> On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
>> Mikhail,
>>
>> I have a TODO on my list - " adding a job that looks for new releases
>> and uploads them to tarballs periodically "
>
> If you point me to how things are added
On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
> Mikhail,
>
> I have a TODO on my list - " adding a job that looks for new releases
> and uploads them to tarballs periodically "
If you point me to how things are added to that mirror I can work
towards that.
Tony.
signature.a
Mikhail,
I have a TODO on my list - " adding a job that looks for new releases
and uploads them to tarballs periodically "
Thanks,
-- Dims
On Fri, Jun 16, 2017 at 3:32 PM, Mikhail Medvedev wrote:
> On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague wrote:
>> On 06/15/2017 10:06 PM, Tony Breeds wrote:
On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague wrote:
> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>> Hi All,
>> I just pushed a review [1] to bump the minimum etcd version to
>> 3.2.0, which works on intel and ppc64le. I know we're pretty late in the
>> cycle to be making changes like this, but r
On 06/15/2017 10:06 PM, Tony Breeds wrote:
> Hi All,
> I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on intel and ppc64le. I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on 3.1.x makes it harder f
Hi All,
I just pushed a review [1] to bump the minimum etcd version to
3.2.0, which works on intel and ppc64le. I know we're pretty late in the
cycle to be making changes like this, but releasing pike with a dependency
on 3.1.x makes it harder for users on ppc64le (not many but a few :D)
Yours
Hi,
I am working on testing kuryr-kubernetes plugin with opendaylight, using
devstack-ocata, with a 3 node setup.
I am hitting the below error when I execute "nova show ${vm_name} | grep
OS-EXT-STS:vm_state", once the vm is booted.
Keyword 'Verify VM Is ACTIVE' failed after retrying fo
is something I should setup correctly..
>
> Could not find much help on this from google.
>
> Can someone please enlighten?
>
> Thanks
>
> Nidhi
>
> *Fro
oduct Engineering Service)
Sent: Wednesday, January 18, 2017 3:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem
in devstack install - No Network found for private
Hi Andreas,
As in between you suggested to try with default dev
le
> or directory". The reason is, that there is no systemd "user unit file".
> This file gets written in Devstack at:
>
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
>
> For that to happen, a se
(nova, neutron, cinder,...) right now.
>> >
>> > In the long run I would like to understand the plans of etcd3 in
>> > devstack. Are the plans to make the default services dependent on etcd3
>> > in the future?
>> >
>> > Thanks a lot!
>> >
s to make the default services dependent on etcd3
> in the future?
>
> Thanks a lot!
>
> Andreas
>
>
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597
>
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
>
> Thanks a lot!
>
> Andreas
>
>
> [1]
>
https://github.com/openstack-dev/devstack/commit/546656fc054
like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
>
> Thanks a lot!
>
> Andreas
>
>
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2]
n the future?
Thanks a lot!
Andreas
[1]
https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
[2] https://review.openstack.org/467597
--
-
Andreas
IRC: andreas_s
gets written in Devstack at:
https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
For that to happen, a service must be in the list "ENABLED_SERVICES":
https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3
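To illustrate the prerequisite described above (a systemd user unit only gets written for services present in ENABLED_SERVICES), a minimal local.conf fragment; the service name is just an example:

```ini
# local.conf sketch -- enable_service appends to ENABLED_SERVICES, which is
# what makes devstack write a unit file for the service at the
# functions-common location linked above. "q-svc" is only an example name.
[[local|localrc]]
enable_service q-svc
```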
On Wed, May 3, 2017 at 6:14 PM, Sean Dague wrote:
> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And
These docs are great. As someone who has avoided learning systemd, I really
appreciate
the time folks put into making these docs. Well done.
-Dave
On Wed, May 3, 2017 at 7:14 PM, Sean Dague wrote:
> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-
This is the cantrip in devstack-gate that's collecting the logs into the
compat format:
https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L794-L797
It's also probably worth dumping the whole journal in native format for
people to download
On 05/03/2017 06:45 PM, James Slagle wrote:
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor wrote:
I absolutely cannot believe I'm saying this given what the change implements
and my general steaming hatred associated with it ... but this is awesome
work and a definite improvement over what existe
On 05/03/2017 07:08 PM, Doug Hellmann wrote:
Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
Screen is going away in Queens.
Making the dev / test runtimes as similar as possible is really
important. And there is so much weird debt around trying to make screen
launch things rel
g, the plan is dropping screen entirely in devstack? I
> > would argue that it is better to keep both screen and systemd, and let
> > users choose one of them based on their preference.
> >
> > Best regards,
> > Hongbin
> >
> >> -Original Mess
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor wrote:
> I absolutely cannot believe I'm saying this given what the change implements
> and my general steaming hatred associated with it ... but this is awesome
> work and a definite improvement over what existed before it. If we're going
> to be stuck
I
> would argue that it is better to keep both screen and systemd, and let users
> choose one of them based on their preference.
>
> Best regards,
> Hongbin
>
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: May-03-17 6:10 AM
-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
>
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for
On 5/3/2017 5:09 AM, Sean Dague wrote:
If you run into any other issues please pop into #openstack-qa (or
respond to this email) and we'll try to work through them.
Something has definitely gone haywire in the cells v1 job since 5/1 and
the journal log handler:
http://status.openstack.org/el
s also a second issue there, which is calling sudo in the
run_process line. If you need to run as a user/group different than
the default, you need to specify that directly.
The run_process command now supports that -
https://github.com/openstack-dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5
On 05/02/2017 08:30 AM, Sean Dague wrote:
We started running systemd for devstack in the gate yesterday, so far so
good.
The following patch (which will hopefully land soon), will convert the
default local use of devstack to systemd as well -
https://review.openstack.org/#/c/461716/. It also inc
We started running systemd for devstack in the gate yesterday, so far so
good.
The following patch (which will hopefully land soon), will convert the
default local use of devstack to systemd as well -
https://review.openstack.org/#/c/461716/. It also includes substantially
updated documentation.
I just proposed the following defaults change in the gate -
https://review.openstack.org/#/c/460062/
Which means we'll be using systemd by default for started services after
it lands. We'll hold for the week, and plan to land this on Monday. If
you would like to test that your jobs work in advance
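For anyone wanting to test in advance of the default flip, the opt-in is a single localrc variable; USE_SYSTEMD as the variable name is an assumption here, not quoted from the patch:

```ini
# local.conf sketch -- opt in to systemd-managed devstack services before
# the gate default changes (variable name is an assumption).
[[local|localrc]]
USE_SYSTEMD=True
```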
On Thu, Apr 13, 2017 at 9:01 PM, Sean Dague wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to
This is all merged now. If you run into any issues with real WSGI
running, please poke up in #openstack-qa and we'll see what we can do to
get things ironed out.
-Sean
On 04/18/2017 07:19 AM, Sean Dague wrote:
> Ok, the patch series has come together now, and
> https://review.openstack.o
Ok, the patch series has come together now, and
https://review.openstack.org/#/c/456344/ remains the critical patch.
This introduces a new global config option: "WSGI_MODE", which will be
either "uwsgi" or "mod_wsgi" (for the transition).
https://review.openstack.org/#/c/456717/6/lib/placement sh
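Per the description above, WSGI_MODE is a global config option taking "uwsgi" or "mod_wsgi" (the latter kept for the transition); a minimal local.conf sketch for exercising the uwsgi path:

```ini
# local.conf sketch -- select the WSGI deployment mode introduced by the
# patch series above; values per the thread are "uwsgi" or "mod_wsgi".
[[local|localrc]]
WSGI_MODE=uwsgi
```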
One of the many reasons for getting all our API services running wsgi
under a real webserver is to get out of the custom ports for all
services game. However, because of some of the limits of apache
mod_wsgi, we really haven't been able to do that in our development
environment. Plus, the moment we
anyway, and started playing around with what this might look like.
The results landed here https://review.openstack.org/#/c/448323/.
Documentation is here
http://git.openstack.org/cgit/openstack-dev/devstack/tree/SYSTEMD.rst
This is currently an opt in. All the services in base devstack however