Yes, this just started occurring with the Thursday/Friday updates to the
Ubuntu cloud image upstream of us.
I have posted a patch for Queens here: https://review.openstack.org/#/c/569531
We will be backporting that as soon as we can to the other stable
releases. Please review the backports as they
Hi rezroo,
Yes, the recent release of pip 10 broke the disk image building.
There is a patch posted here: https://review.openstack.org/#/c/562850/
pending review that works around this issue for the ocata branch by
pinning the pip used for the image building to a version that does not
have this
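For anyone hitting this locally before the patch merges: pinning pip below 10 in the build environment is the general shape of the workaround. This is an illustrative sketch, not the actual patch; see the linked review for the real change.

```sh
# Illustrative only: constrain pip to the last 9.x series so the
# disk image build keeps working until the real fix lands.
python -m pip install 'pip<10'
```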
2018-04-16 7:46 GMT+00:00 Ian Wienand :
> On 04/15/2018 09:32 PM, Gary Kotton wrote:
>>
>> The gate is currently broken with
>> https://launchpad.net/bugs/1763966.
>> https://review.openstack.org/#/c/561427/
>> Can unblock us in the short term. Any other ideas?
>
>
> I'm
On 04/15/2018 09:32 PM, Gary Kotton wrote:
The gate is currently broken with
https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/
Can unblock us in the short term. Any other ideas?
I'm thinking this is probably along the lines of the best idea. I
left a fairly long
Right. Thx Gary :)
> Message written by Gary Kotton on 16.04.2018,
> at 09:14:
>
> Hi,
> I think that we need https://review.openstack.org/561471 until we have a
> proper solution.
> Thanks
> Gary
>
> On 4/16/18, 10:13 AM, "Slawomir Kaplonski"
Hi,
I think that we need https://review.openstack.org/561471 until we have a proper
solution.
Thanks
Gary
On 4/16/18, 10:13 AM, "Slawomir Kaplonski" wrote:
Hi,
I just wanted to ask if there is any ongoing work on
Hi,
I just wanted to ask if there is any ongoing work on
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It
looks like e.g. all grenade jobs in neutron are currently broken :/
> Message written by Gary Kotton on 15.04.2018,
> at
On Thu, Mar 29, 2018 at 5:21 AM, James E. Blair wrote:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is
>
> Neither local nor third-party CI use should be affected. There's no
> change in behavior based on current usage patterns. Only the caveat
> that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
> non-existent package name), it will not automatically be caught.
>
> -Jim
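For reference, LIBS_FROM_GIT is set in local.conf as a comma-separated list of projects that devstack should install from their git checkouts instead of from released packages. A minimal sketch, with illustrative project names:

```ini
[[local|localrc]]
# Install these libraries from their git trees rather than from pip
LIBS_FROM_GIT=oslo.config,python-novaclient
```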
Sean McGinnis writes:
> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> >
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior. This
On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> > Hi,
> >
> > I've proposed a change to devstack which slightly alters the
> > LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> > those using legacy
On Fri, Mar 16, 2018 at 02:29:51PM +, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk 0.11.3 and
> os-service-types 1.1.0 are described in openstack's upper-constraints.txt,
>
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
>
Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more
On 03/16/2018 09:29 AM, Kwan, Louie wrote:
In the stable/queens branch, since openstacksdk 0.11.3 and os-service-types 1.1.0
are described in openstack's upper-constraints.txt,
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
On 3/16/2018 9:29 AM, Kwan, Louie wrote:
In the stable/queens branch, since openstacksdk 0.11.3 and os-service-types 1.1.0
are described in openstack's upper-constraints.txt,
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
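As background for the thread: upper-constraints.txt is applied through pip's -c option, which caps every dependency at the listed version. A sketch of typical usage, with an illustrative raw URL (derived from the blob link above) and package:

```sh
# Illustrative: install a client while honoring the stable/queens caps.
pip install \
  -c https://raw.githubusercontent.com/openstack/requirements/stable/queens/upper-constraints.txt \
  openstacksdk
```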
On Mon, 5 Mar 2018, 1:02 am Ian Wienand, wrote:
> Hello,
>
> Jens Harbott (frickler) has agreed to take on core responsibilities in
> devstack, so feel free to bug him about reviews :)
>
Yay +1
>
> We have also added the members of qa-release in directly to
>
On 2018-01-24 14:14, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install the devstack-plugin-container plugin on
> Fedora, it complains here [1] about not being able to sync the cache
> for the repo with the following error [2].
>
> This is affecting me on Fedora26+ from
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
>
> Since today, when I try to install the devstack-plugin-container plugin on
> Fedora, it complains here [1] about not being able to sync the cache
> for the repo with the following error [2].
>
> This is affecting
cor...@inaugust.com (James E. Blair) writes:
> "gong_ys2004" writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
>>
"gong_ys2004" writes:
> Hi, everyone
> I am trying to migrate tacker's functional CI job into new zuul v3 framework,
> but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
> https://review.openstack.org/#/c/516004/4/.zuul.yaml:I
The workaround [1] has not landed yet. I saw it has +1 workflow but has not
been merged.
Thanks,
Tong
[1] https://review.openstack.org/#/c/508344/
On Mon, Oct 2, 2017 at 6:51 AM, Mehdi Abaakouk wrote:
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
>
, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
>
> Looks like the LIBS_FROM_GIT workarounds have land
Looks like the LIBS_FROM_GIT workarounds have landed, but I still have some
issues
on telemetry integration jobs:
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk
I have overlay2 and super fast disk I/O (memory cheat + SSD),
just the CPU freq is not high. The CPU is a Broadwell,
and it actually has a lot more cores (E5-2630V4). Even a 5-year-old gamer CPU
can be 2 times
faster on a single core, but cannot compete with all of the cores ;-)
This machine has
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons. Any debugging would
On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true.
On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
On 09/29/2017 03:37 PM, Ian Wienand wrote:
I'm not aware of issues other than these at this time
Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons. Any debugging would be
On 09/29/2017 03:37 PM, Ian Wienand wrote:
I'm not aware of issues other than these at this time
Actually, that is not true. legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons. Any debugging would be helpful,
thanks.
-i
On 26 September 2017 at 07:34, Attila Fazekas wrote:
> decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
>
> Fully pulling all containers takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm we usually have 4
decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
Fully pulling all containers takes something like ~4.5 min (from localhost,
one leaf request at a time),
but on the gate vm we usually have 4 cores,
so it is possible to go below 2 min with a better pulling strategy,
unless we hit
On Fri, Jun 16, 2017 at 12:06:47PM +1000, Tony Breeds wrote:
> Hi All,
> I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on Intel and ppc64le. I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on
On 22 September 2017 at 17:21, Paul Belanger wrote:
> On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> > "if DevStack gets custom images prepped to make its jobs
>> > run faster, won't
On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> > "if DevStack gets custom images prepped to make its jobs
> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> > do we draw that line?). "
> >
> >
On 22 September 2017 at 11:45, Clark Boylan wrote:
> On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
>> Another, more revolutionary (for good or ill) alternative would be to
>> move gates to run Kolla instead of DevStack. We're working towards
>> registry of
On Fri, Sep 22, 2017, at 01:18 PM, Attila Fazekas wrote:
> The main offenders reported by devstack do not seem to explain the
> growth visible on OpenStack Health [1].
> The logs have also started to disappear, which does not make it easy to figure
> out.
>
>
> Which code/infra changes can be related
On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
> Another, more revolutionary (for good or ill) alternative would be to
> move gates to run Kolla instead of DevStack. We're working towards
> registry of images, and we support most of openstack services now. If
> we enable mixed
On 22 September 2017 at 07:31, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> "if DevStack gets custom images prepped to make its jobs
>> run faster, won't Triple-O, Kolla, et cetera want the same and where
>> do we draw that line?). "
>>
On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> "if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?). "
>
> IMHO we can try to have only one big image per distribution,
> where the
"if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?). "
IMHO we can try to have only one big image per distribution,
where the packages are the union of the packages requested by all teams,
minus the
On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
[...]
> The image building was the good old working solution and unless
> the image build become a super expensive thing, this is still the
> best option.
[...]
It became a super expensive thing, and that's the main reason we
stopped
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand wrote:
> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and
On 09/20/2017 09:30 AM, David Moreau Simard wrote:
At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose ?
... and we can put -dsvm- in the jobs names to indicate it should run
on these nodes :)
Older
On Tue, Sep 19, 2017 at 9:03 AM, Jeremy Stanley wrote:
>
> In order to reduce image sizes and the time it takes to build
> images, once we had local package caches in each provider we stopped
> pre-retrieving packages onto the images. Is the time spent at this
> stage mostly
On 09/19/2017 11:03 PM, Jeremy Stanley wrote:
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
The jobs do a 120..220 sec apt-get install, and packages defined in
/files/debs/general are missing from the images before starting the job.
Is the time spent at this stage mostly
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
> Let's start with the first obvious difference compared to the old-time
> jobs:
> The jobs do a 120..220 sec apt-get install, and packages defined in
> /files/debs/general are missing from the images before starting the job.
>
> We
Hi David,
Thanks for looking into this. I do watch devstack changes every once in a
while but couldn't catch this one in time. The missing pmap -XX flag
problem has been there forever but it used to be non fatal. Now it is,
which is in principle a good change.
I will make sure that it passes
On 08/02/2017 07:17 AM, Sean Dague wrote:
The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute
An issue with the xenserver CI was identified. Once we get this patch
in, and backported to ocata, it should also address a frequent grenade
multinode fail scenario which is plaguing the gate.
-Sean
On 08/02/2017 07:17 AM, Sean Dague wrote:
The 3 node scenarios in Neutron (which are
On Mon, Jun 19, 2017 at 08:17:53AM -0400, Davanum Srinivas wrote:
> Tony,
>
>
> On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds wrote:
> > On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> >
> >> Awesome! thanks Tony, some kolla jobs do that for example,
Tony,
On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds wrote:
> On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
>
>> Awesome! thanks Tony, some kolla jobs do that for example, but i think
>> this job is a better one to key off of:
>>
On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> Awesome! thanks Tony, some kolla jobs do that for example, but i think
> this job is a better one to key off of:
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
>
> Outline of the
On Sun, Jun 18, 2017 at 7:36 PM, Tony Breeds wrote:
> On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
>> Mikhail,
>>
>> I have a TODO on my list - " adding a job that looks for new releases
>> and uploads them to tarballs periodically "
>
> If you point
On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
> Mikhail,
>
> I have a TODO on my list - " adding a job that looks for new releases
> and uploads them to tarballs periodically "
If you point me to how things are added to that mirror I can work
towards that.
Tony.
Mikhail,
I have a TODO on my list - " adding a job that looks for new releases
and uploads them to tarballs periodically "
Thanks,
-- Dims
On Fri, Jun 16, 2017 at 3:32 PM, Mikhail Medvedev wrote:
> On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague wrote:
>> On
On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague wrote:
> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>> Hi All,
>> I just pushed a review [1] to bump the minimum etcd version to
>> 3.2.0, which works on Intel and ppc64le. I know we're pretty late in the
>> cycle to be making
On 06/15/2017 10:06 PM, Tony Breeds wrote:
> Hi All,
> I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on Intel and ppc64le. I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on 3.1.x makes it harder
On 11.05.2017 15:56, Markus Zoeller wrote:
> I'm working on a nova live-migration hook which configures and starts
> the nova-serialproxy service, runs a subset of tempest tests, and tears
> down the previously started service.
>
>https://review.openstack.org/#/c/347471/47
>
> After the
Thanks for the help Mikhail,
So just FYI for others: etcd 3.2.0 is at RC1. We will get a full
set of arch(es) covered once that goes GA
Thanks,
Dims
On Wed, May 24, 2017 at 8:45 AM, Mikhail S Medvedev wrote:
>
> On 05/24/2017 06:59 AM, Sean Dague wrote:
>>
>> On
In the meanwhile I found some more information like [1].
I understood that devstack downloads the binaries from GitHub, as distros
don't have the latest version available. But the binaries for s390x are
not yet provided there. I opened an issue to figure out what would need
to be done to get the
On 05/24/2017 06:59 AM, Sean Dague wrote:
On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together,
>
> recently etcd3 was enabled as a service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together,
>
> recently etcd3 was enabled as a service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries available and
> there's no way to disable the etcd3 service.
>
> I pushed a patch to allow disabling
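devstack's standard switch for turning a service off in local.conf is disable_service; once a patch like the one mentioned wires etcd3 into that mechanism, disabling it would look roughly like this (a sketch contingent on that patch landing):

```ini
[[local|localrc]]
# Requires the patch above; before it, etcd3 could not be turned off
disable_service etcd3
```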
On Wed, May 3, 2017 at 6:14 PM, Sean Dague wrote:
> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
These docs are great. As someone who has avoided learning systemd, I really
appreciate
the time folks put into making these docs. Well done.
-Dave
On Wed, May 3, 2017 at 7:14 PM, Sean Dague wrote:
> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's
This is the cantrip in devstack-gate that's collecting the logs into the
compat format:
https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L794-L797
It's also probably worth dumping the whole journal in native format for
people to download
On 05/03/2017 06:45 PM, James Slagle wrote:
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor wrote:
I absolutely cannot believe I'm saying this given what the change implements
and my general steaming hatred associated with it ... but this is awesome
work and a definite
On 05/03/2017 07:08 PM, Doug Hellmann wrote:
Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
Screen is going away in Queens.
Making the dev / test runtimes as similar as possible is really
important. And there is so much weird debt around trying to make screen
launch things
is dropping screen entirely in devstack? I
> > would argue that it is better to keep both screen and systemd, and let
> > users choose one of them based on their preference.
> >
> > Best regards,
> > Hongbin
> >
>> -----Original Message-----
>
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor wrote:
> I absolutely cannot believe I'm saying this given what the change implements
> and my general steaming hatred associated with it ... but this is awesome
> work and a definite improvement over what existed before it. If
would argue that it is better to keep both screen and systemd, and let users
> choose one of them based on their preference.
>
> Best regards,
> Hongbin
>
>> -----Original Message-----
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: May-03-17 6:10 AM
>>
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
>
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for devst
On 5/3/2017 5:09 AM, Sean Dague wrote:
If you run into any other issues please pop into #openstack-qa (or
respond to this email) and we'll try to work through them.
Something has definitely gone haywire in the cells v1 job since 5/1 and
the journal log handler:
On 05/02/2017 08:30 AM, Sean Dague wrote:
> We started running systemd for devstack in the gate yesterday, so far so
> good.
>
> The following patch (which will hopefully land soon), will convert the
> default local use of devstack to systemd as well -
> https://review.openstack.org/#/c/461716/.
On 05/02/2017 08:30 AM, Sean Dague wrote:
We started running systemd for devstack in the gate yesterday, so far so
good.
The following patch (which will hopefully land soon), will convert the
default local use of devstack to systemd as well -
https://review.openstack.org/#/c/461716/. It also
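For local installs, the behavior is governed from local.conf; a minimal sketch using the option that patch flips on by default (assuming the USE_SYSTEMD name from the devstack tree):

```ini
[[local|localrc]]
# Run devstack services as systemd units instead of under screen
USE_SYSTEMD=True
```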
On Thu, Apr 13, 2017 at 9:01 PM, Sean Dague wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really
This is all merged now. If you run into any issues with real WSGI
running, please poke up in #openstack-qa and we'll see what we can do to
get things ironed out.
-Sean
On 04/18/2017 07:19 AM, Sean Dague wrote:
> Ok, the patch series has come together now, and
>
Ok, the patch series has come together now, and
https://review.openstack.org/#/c/456344/ remains the critical patch.
This introduces a new global config option: "WSGI_MODE", which will be
either "uwsgi" or "mod_wsgi" (for the transition).
https://review.openstack.org/#/c/456717/6/lib/placement
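In local.conf terms, the transition option described above would look like this (values per the message; a sketch, not the full patch):

```ini
[[local|localrc]]
# "uwsgi" is the target; "mod_wsgi" remains available during the transition
WSGI_MODE=uwsgi
```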
On 4/5/2017 3:09 PM, Sean Dague wrote:
At the PTG clayg brought up an excellent question about what the
expected flow was to restart a bunch of services in devstack after a
code changes that impacts many of them (be it common code, or a
library). People had created a bunch of various screen
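The flow that came out of that discussion relies on the devstack@ systemd unit naming; a sketch, assuming the standard unit names:

```sh
# Restart one service after a code change
sudo systemctl restart devstack@n-cpu.service

# Restart every devstack-managed service, e.g. after a library change
sudo systemctl restart "devstack@*"
```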
On Wed, Apr 5, 2017 at 1:30 PM, Andrea Frittoli
wrote:
>
>
> I just want to say thank you! to you clarkb clayg and everyone involved :)
> This is so much better!
>
> andreaf
>
>
Sean is throwing credit at me where none is due. IIRC I was both in the
room and in a
On Wed, Apr 5, 2017 at 9:14 PM Sean Dague wrote:
> At the PTG clayg brought up an excellent question about what the
> expected flow was to restart a bunch of services in devstack after a
> code changes that impacts many of them (be it common code, or a
> library). People had
On Thu, Oct 8, 2015 at 5:41 PM, Monty Taylor wrote:
> On 10/08/2015 07:13 PM, Christopher Aedo wrote:
>>
>> On Thu, Oct 8, 2015 at 9:38 AM, Sean M. Collins
>> wrote:
>>>
>>> Please see my response here:
>>>
>>>
>>>
On 03/02/2017 08:18 AM, Evgeny Antyshev wrote:
> Hello, devstack!
>
> I want to draw some attention to the fact that the install_libvirt function
> now (since https://review.openstack.org/#/c/438325 landed)
> only works for CentOS 7, but not for other RHEL-based distributions:
> Virtuozzo and,
On Wed, 15 Feb 2017, Vega Cai wrote:
After digging into the log files, we find the reason is that the placement
API Apache configuration file generated by DevStack doesn't grant the necessary
access rights to the placement API bin folder. On the first node, where
Keystone is running, Apache
Sean Dague wrote:
> I'll probably still default this to python3, it is the future direction
> we are headed.
Works for me :)
--
Sean M. Collins
Hi Wasiq!
On Tue, Jan 17, 2017 at 1:34 PM, Wasiq Noor
wrote:
> Hello,
>
> I am Wasiq from Namal College Mianwali, Pakistan. Following the link:
> https://wiki.openstack.org/wiki/DisasterRecovery, I have developed a
> disaster recovery solution for Keystone for various
Excerpts from Sean Dague's message of 2017-01-17 11:50:39 -0500:
> On 01/17/2017 11:46 AM, Victor Stinner wrote:
> > On 17/01/2017 at 17:36, Sean Dague wrote:
> >> When putting the cli interface on it, I discovered python3's argparse
> >> has subparsers built in. This makes building up the cli
On 01/17/2017 11:46 AM, Victor Stinner wrote:
> On 17/01/2017 at 17:36, Sean Dague wrote:
>> When putting the cli interface on it, I discovered python3's argparse
>> has subparsers built in. This makes building up the cli much easier, and
>> removes pulling in a dependency for that. (Currently
On 17/01/2017 at 17:36, Sean Dague wrote:
When putting the cli interface on it, I discovered python3's argparse
has subparsers built in. This makes building up the cli much easier, and
removes pulling in a dependency for that. (Currently the only item in
requirements.txt is pbr). This is
Sent: Wednesday, December 7, 2016 8:21 AM
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [devstack]VersionConflict exception during
stack.sh - resend with explanation.
Thanks Tony for the reply
This is Jenkins third part
Thanks Tony for the reply.
This is a Jenkins third-party CI, so it is doing unstack, removing the repo, and
stacking again from the master branch.
It had been running OK, and then it was disabled for about a month. Now that we
have reenabled it, it hits these VersionConflict errors one by one. After I manually
installed
On Tue, Dec 06, 2016 at 08:02:23PM +, Wanjing Xu (waxu) wrote:
>
> Hi,
> My devstack had been OK a month ago. But recently it keeps having this
> VersionConflict error. If I manually install the required module version,
> it will move on but then it will error out at some other module. I
Thanks for replying. Sorry I did not write an explanation in this email, so I
sent another one. Basically, there are too many of these kinds of conflicts; I
already manually fixed about 8, and there are still more. If I fix them all this time, it
may still happen down the road if somebody updates constraints
Try to manually uninstall it first: sudo pip uninstall python-heatclient
Then launch devstack again. It will re-install the right version.
/ludovic
-----Original Message-----
From: Wanjing Xu (waxu) [mailto:w...@cisco.com]
Sent: December-06-16 2:43 PM
To: OpenStack Development Mailing List
On 6 December 2016 at 13:12, Jens Rosenboom wrote:
> 2016-12-06 7:16 GMT+01:00 Yipei Niu :
>> Hi, All,
>>
>> I failed installing devstack on Ubuntu. The detailed info of local.conf and
>> error is pasted in http://paste.openstack.org/show/591493/.
>>
>>
2016-12-06 7:16 GMT+01:00 Yipei Niu :
> Hi, All,
>
> I failed installing devstack on Ubuntu. The detailed info of local.conf and
> error is pasted in http://paste.openstack.org/show/591493/.
>
> BTW, python2.7 is installed in Ubuntu, and Python.h can be found under
>
Try:
apt-get install python-dev
BTW, this list is for openstack developers. For questions about
installation and usage, please post to openst...@lists.openstack.org
or try ask.openstack.org
Regards,
Qiming
On Tue, Dec 06, 2016 at 02:07:58PM +0800, Yipei Niu wrote:
> Hi, All,
>
> I
Sean M. Collins wrote:
> zhi wrote:
> > hi, all.
> >
> > I have a quick question about devstack.
> >
> > Can I specify the Open vSwitch version in local.conf during the
> > installation of devstack? I want OVS 2.6.0 in my devstack. Can I specify
> > it?
>
>
> The DevStack plugin for Neutron
zhi wrote:
> hi, all.
>
> I have a quick question about devstack.
>
> Can I specify the Open vSwitch version in local.conf during the
> installation of devstack? I want OVS 2.6.0 in my devstack. Can I specify
> it?
The DevStack plugin for Neutron has a way to build a specific OVS
version
On 11/18/2016 07:33 AM, zhi wrote:
hi, all.
I have a quick question about devstack.
Can I specify the Open vSwitch version in local.conf during the
installation of devstack? I want OVS 2.6.0 in my devstack. Can I
specify it?
Thanks
Zhi Chang
On 15 November 2016 at 15:04, Kevin Benton wrote:
> Hi all,
>
>
> Right now, we do something in devstack that does not reflect how
> deployments are normally done. We setup a route on the parent host to the
> private tenant network that routes through the tenant's router[1].