On Tue, Apr 12, 2016 at 11:08:28PM +0200, Gabriele Cerami wrote:
> On Fri, 2016-04-08 at 16:18 +0100, Steven Hardy wrote:
>
> > Note we're not using devtest at all anymore, the developer script
> > many folks use is tripleo.sh:
>
> So, I followed the flow of the gate jobs starting from
On Fri, 2016-04-08 at 16:18 +0100, Steven Hardy wrote:
> Note we're not using devtest at all anymore, the developer script
> many folks use is tripleo.sh:
So, I followed the flow of the gate jobs starting from the jenkins builder
script, and it seems like it's using devtest (or maybe something I
Hi Andrey,
I've checked this option - using rally to configure and run tempest
tests.
Although it looks like a great choice, unfortunately a few issues and
bugs make it not useful right now. For example, it cannot work with
current public networks and cannot create new ones, so that
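For context, a minimal sketch of the rally-driven tempest flow being
evaluated here, driving the rally CLI from Python. The subcommand names
vary between Rally releases and the verifier name is made up, so treat
both as assumptions to check against the installed version:

import subprocess
import sys


def run(cmd):
    # Echo each command and fail fast on a non-zero exit, as a CI step would.
    print("+", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:
        sys.exit(1)


# Prepare the tempest verifier (older Rally releases used
# `rally verify install` instead of create-verifier).
run(["rally", "verify", "create-verifier", "--type", "tempest",
     "--name", "tripleo-tempest"])  # verifier name is hypothetical

# Run a subset of tempest. The compute API tests are chosen here because
# the network scenario tests are exactly where the public-network issue
# described above bites.
run(["rally", "verify", "start", "--pattern", "tempest.api.compute"])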
On 7 April 2016 at 22:03, Gabriele Cerami wrote:
> Hi,
>
> I'm trying to find an entry point to join the effort in TripleO CI.
Hi Gabriele, welcome aboard
> I studied the infrastructure and the scripts, but there's still something I'm
> missing.
> The last step of studying
Hi Gabriele,
On Thu, Apr 07, 2016 at 05:03:33PM -0400, Gabriele Cerami wrote:
> Hi,
>
> I'm trying to find an entry point to join the effort in TripleO CI.
> I studied the infrastructure and the scripts, but there's still something I'm
> missing.
> The last step of studying the complex
Hi Sagi,
On Thu, Apr 7, 2016 at 5:56 PM, Sagi Shnaidman wrote:
> Hi, all
>
> I'd like to discuss how we configure tempest in CI jobs for TripleO.
> I currently have two patches:
> support for tempest: https://review.openstack.org/#/c/295844/
> actually
Hi,
I'm trying to find an entry point to join the effort in TripleO CI.
I studied the infrastructure and the scripts, but there's still something I'm
missing.
The last step of studying the complex landscape of TripleO CI and the first to
start contributing
is being able to reproduce failures in
Hi all,
I'd like to discuss how we configure tempest in CI jobs for TripleO.
I currently have two patches:
support for tempest: https://review.openstack.org/#/c/295844/
actually run of tests: https://review.openstack.org/#/c/297038/
Right now there is no upstream tool to
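To make the scope of those patches concrete, here is a minimal sketch of
the kind of tempest.conf a CI job has to generate. The option names
follow tempest's sample configuration of that era (options have since
moved between sections), and every value below is a placeholder:

import configparser

conf = configparser.ConfigParser()
conf["identity"] = {
    "uri": "http://192.0.2.1:5000/v2.0",  # placeholder keystone endpoint
    "admin_username": "admin",
    "admin_password": "secret",           # placeholder credential
    "admin_tenant_name": "admin",
}
conf["network"] = {
    # ID of the pre-created external network the job should reuse
    "public_network_id": "REPLACE_WITH_NETWORK_UUID",
}
conf["compute"] = {
    "image_ref": "REPLACE_WITH_IMAGE_UUID",   # typically a cirros test image
    "flavor_ref": "REPLACE_WITH_FLAVOR_ID",
}

with open("tempest.conf", "w") as f:
    conf.write(f)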
On Tue, 2016-03-08 at 17:58 +0000, Derek Higgins wrote:
> On 7 March 2016 at 18:22, Ben Nemec wrote:
> >
> > On 03/07/2016 11:33 AM, Derek Higgins wrote:
> > >
> > > On 7 March 2016 at 15:24, Derek Higgins
> > > wrote:
> > > >
> > > > On 6 March 2016
On 9 March 2016 at 07:08, Richard Su wrote:
>
>
> On 03/08/2016 09:58 AM, Derek Higgins wrote:
>>
>> On 7 March 2016 at 18:22, Ben Nemec wrote:
>>>
>>> On 03/07/2016 11:33 AM, Derek Higgins wrote:
On 7 March 2016 at 15:24, Derek Higgins
On 03/08/2016 09:58 AM, Derek Higgins wrote:
On 7 March 2016 at 18:22, Ben Nemec wrote:
On 03/07/2016 11:33 AM, Derek Higgins wrote:
On 7 March 2016 at 15:24, Derek Higgins wrote:
On 6 March 2016 at 16:58, James Slagle
On Tue, Mar 8, 2016 at 12:58 PM, Derek Higgins wrote:
> We discussed this at today's meeting but never really came to a
> conclusion except to say most people wanted to try it. The main
> objection brought up was that we shouldn't go dropping the nonha job,
> that isn't what I
On 03/08/2016 11:58 AM, Derek Higgins wrote:
> On 7 March 2016 at 18:22, Ben Nemec wrote:
>> On 03/07/2016 11:33 AM, Derek Higgins wrote:
>>> On 7 March 2016 at 15:24, Derek Higgins wrote:
On 6 March 2016 at 16:58, James Slagle
On 7 March 2016 at 18:22, Ben Nemec wrote:
> On 03/07/2016 11:33 AM, Derek Higgins wrote:
>> On 7 March 2016 at 15:24, Derek Higgins wrote:
>>> On 6 March 2016 at 16:58, James Slagle wrote:
On Sat, Mar 5, 2016 at 11:15 AM,
On 03/07/2016 11:33 AM, Derek Higgins wrote:
> On 7 March 2016 at 15:24, Derek Higgins wrote:
>> On 6 March 2016 at 16:58, James Slagle wrote:
>>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
I'm kind of hijacking
On 03/07/2016 12:00 PM, Derek Higgins wrote:
> On 7 March 2016 at 12:11, John Trowbridge wrote:
>>
>>
>> On 03/06/2016 11:58 AM, James Slagle wrote:
>>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
I'm kind of hijacking Dan's e-mail but I
On 7 March 2016 at 12:11, John Trowbridge wrote:
>
>
> On 03/06/2016 11:58 AM, James Slagle wrote:
>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
>>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>>> technical improvements to
On 7 March 2016 at 15:24, Derek Higgins wrote:
> On 6 March 2016 at 16:58, James Slagle wrote:
>> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
>>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>>>
On 6 March 2016 at 16:58, James Slagle wrote:
> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>> technical improvements to stop having so many CI failures.
>>
>>
>> 1/ Stop
On Sat, 2016-03-05 at 11:15 -0500, Emilien Macchi wrote:
> I'm kind of hijacking Dan's e-mail but I would like to propose some
> technical improvements to stop having so many CI failures.
>
>
> 1/ Stop creating swap files. We don't have SSDs; it is IMHO a terrible
> mistake to swap on files
On 03/06/2016 11:58 AM, James Slagle wrote:
> On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
>> I'm kind of hijacking Dan's e-mail but I would like to propose some
>> technical improvements to stop having so many CI failures.
>>
>>
>> 1/ Stop creating swap files. We
On 03/06/2016 05:58 PM, James Slagle wrote:
On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
I'm kind of hijacking Dan's e-mail but I would like to propose some
technical improvements to stop having so many CI failures.
1/ Stop creating swap files. We don't have
On Sat, Mar 5, 2016 at 11:15 AM, Emilien Macchi wrote:
> I'm kind of hijacking Dan's e-mail but I would like to propose some
> technical improvements to stop having so many CI failures.
>
>
> 1/ Stop creating swap files. We don't have SSDs; it is IMHO a terrible
> mistake to
I'm kind of hijacking Dan's e-mail but I would like to propose some
technical improvements to stop having so many CI failures.
1/ Stop creating swap files. We don't have SSDs; it is IMHO a terrible
mistake to swap on files because we don't have enough RAM. In my
experience, swapping on non-SSD
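For readers not familiar with the setup being criticised, this is the
standard Linux swap-on-file sequence the CI nodes were effectively
running; the path and size are illustrative, not taken from the actual
CI scripts:

import subprocess

SWAPFILE = "/swapfile"   # illustrative path
SIZE_MB = 2048           # illustrative size

# Allocate the file, lock down permissions (swapon requires 0600),
# format it as swap, and enable it.
subprocess.check_call(["fallocate", "-l", "%dM" % SIZE_MB, SWAPFILE])
subprocess.check_call(["chmod", "0600", SWAPFILE])
subprocess.check_call(["mkswap", SWAPFILE])
subprocess.check_call(["swapon", SWAPFILE])

On spinning disks every swapped-out page turns into extra random I/O,
which is the contention Emilien is pointing at.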
Just a quick update about the CI outage today and yesterday. Turns out
our jobs weren't running due to a bad Keystone URL (it was pointing to
localhost:5000 instead of our public SSL endpoint).
We've now fixed that issue and I'm told that as soon as Infra restarts
nodepool (they cache the
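A pre-flight check along these lines would catch this class of
misconfiguration before jobs queue up; the endpoint URL below is a
placeholder, and requests is the only dependency:

import sys
from urllib.parse import urlparse

import requests

AUTH_URL = "https://ci-overcloud.example.org:13000/"  # placeholder endpoint

# Refuse to start if the auth URL is a localhost default rather than
# the public SSL endpoint.
host = urlparse(AUTH_URL).hostname
if host in ("localhost", "127.0.0.1"):
    sys.exit("auth URL points at localhost - check the cloud config")

# Keystone serves a version document at its root; any non-error response
# tells us the endpoint is reachable and TLS validates.
resp = requests.get(AUTH_URL, timeout=10)
resp.raise_for_status()
print("keystone reachable:", resp.status_code)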
On 11/04/15 14:02, Dan Prince wrote:
Looks like our SSL certificate has expired for the currently active CI
cloud. We are working on getting a new one generated and installed.
Until then CI jobs won't get processed.
A new cert has been installed in the last few minutes and ZUUL has
started
Looks like our SSL certificate has expired for the currently active CI
cloud. We are working on getting a new one generated and installed.
Until then CI jobs won't get processed.
Dan
Tl;dr tripleo ci is back up and running, see below for more
On 21/03/15 01:41, Dan Prince wrote:
Short version:
The RH1 CI region has been down since yesterday afternoon.
We have a misbehaving switch and have filed a support ticket with the
vendor to troubleshoot things further. We hope to
Short version:
The RH1 CI region has been down since yesterday afternoon.
We have a misbehaving switch and have filed a support ticket with the
vendor to troubleshoot things further. We hope to know more this
weekend, or Monday at the latest.
Long version:
Yesterday afternoon we started seeing
It's been a bad week for CI, mostly due to setuptools.
Cores, please review https://review.openstack.org/#/c/144184/ immediately,
as CI is currently broken.
2014-12-19 - Neutron committed a change which had a symlink. This broke
pip install neutron, which broke CI for around 6 hours.
2014-12-22
Two major CI outages this week
2014-12-12 - 2014-12-15 - pip install MySQL-python failing on fedora
- There was an updated mariadb-devel package, which caused pip install of
the python bindings to fail as gcc could not build using the provided
headers.
- derekh put in a workaround on the 15th
Excerpts from James Polley's message of 2014-12-19 17:10:41 +0000:
Two major CI outages this week
2014-12-12 - 2014-12-15 - pip install MySQL-python failing on fedora
- There was an updated mariadb-devel package, which caused pip install of
the python bindings to fail as gcc could not build
Resending with correct subject tag. Never send email before coffee.
On Fri, Dec 12, 2014 at 9:33 AM, James Polley j...@jamezpolley.com wrote:
In the week since the last email we've had no major CI failures. This
makes it very easy for me to write my first CI report.
There was a brief period
On 04/12/14 13:37, Dan Prince wrote:
On Thu, 2014-12-04 at 11:51 +, Derek Higgins wrote:
A month since my last update, sorry my bad
since the last email we've had 5 incidents causing ci failures
26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half)
- We seem to suffer any
On Thu, 2014-12-04 at 11:51 +0000, Derek Higgins wrote:
A month since my last update, sorry my bad
since the last email we've had 5 incidents causing ci failures
26/11/2014 : Lots of ubuntu jobs failed over 24 hours (maybe half)
- We seem to suffer any time an ubuntu mirror isn't in sync
On 11/27/2014 02:23 PM, Derek Higgins wrote:
On 27/11/14 10:21, Duncan Thomas wrote:
I'd suggest starting by making it an extra job, so that it can be
monitored for a while for stability without affecting what is there.
we have to be careful here, adding an extra job for this is probably the
On 11/27/2014 07:23 AM, Derek Higgins wrote:
On 27/11/14 10:21, Duncan Thomas wrote:
I'd suggest starting by making it an extra job, so that it can be
monitored for a while for stability without affecting what is there.
we have to be careful here, adding an extra job for this is probably the
I'd suggest starting by making it an extra job, so that it can be monitored
for a while for stability without affecting what is there.
I'd be supportive of making it the default HA job in the longer term as
long as the LVM code is still getting tested somewhere - LVM is still the
reference
On 27/11/14 10:21, Duncan Thomas wrote:
I'd suggest starting by making it an extra job, so that it can be
monitored for a while for stability without affecting what is there.
we have to be careful here, adding an extra job for this is probably the
safest option but tripleo CI resources are a
hi there,
while working on the TripleO cinder-ha spec meant to provide HA for
Cinder via Ceph [1], we wondered how to (if at all) test this in CI, so
we're looking for some feedback.
First of all, shall we make Cinder/Ceph the default for our (currently
non-voting) HA job?
Hi All,
The week before last saw no problems with CI.
But last week we had 3 separate problems causing tripleo CI tests to
fail until they were dealt with:
1. pypi.openstack.org is no longer being maintained, which we were using
in tripleo-ci; we've now moved to pypi.python.org
2. nova started
Hi All,
Nothing to report since the last report, 2 weeks of no breakages.
thanks,
Derek.
Hi All,
There was 1 CI event last week:
a regression in ironic, https://bugs.launchpad.net/tripleo/+bug/1375641
All ironic tripleo CI tests failed for about 12 hours.
For more info see https://etherpad.openstack.org/p/tripleo-ci-breakages
thanks,
Derek.
Hi All,
On Wednesday, I started keeping a short summary of issues hit by
tripleo CI, so in time we can look back to properly assess the frequency
of problems along with their causes.
The list will be maintained here (most recent at the top)
- Original Message -
Well we probably need some backwards compat glue to keep deploying supported
versions. More on that in the spec I'm drafting.
A spec around deploying multiple versions of the overcloud? If so, great :-)
Re: https://bugs.launchpad.net/tripleo/+bug/1330735 and
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]: missing heat data obtained from the Heat CFN API
Neutron [3]: a broken GRE overlay network setup
The TripleO check jobs look to be running stable again today
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]: missing heat data obtained from the Heat CFN API
Neutron [3]: a broken GRE overlay network setup
The last two are bugs, but
On Jun 20, 2014 1:52 PM, Charles Crouch ccro...@redhat.com wrote:
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]: missing heat data obtained from the Heat CFN API
- Original Message -
On Jun 20, 2014 1:52 PM, Charles Crouch ccro...@redhat.com wrote:
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]:
On Fri, Jun 20, 2014 at 2:15 PM, Charles Crouch ccro...@redhat.com wrote:
- Original Message -
On Jun 20, 2014 1:52 PM, Charles Crouch ccro...@redhat.com wrote:
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related
Excerpts from Charles Crouch's message of 2014-06-20 13:51:49 -0700:
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]: missing heat data obtained from the Heat CFN API
On Fri, 2014-06-20 at 16:51 -0400, Charles Crouch wrote:
- Original Message -
Not a great week for TripleO CI. We had 3 different failures related to:
Nova [1]: we were using a deprecated config option
Heat [2]: missing heat data obtained from the Heat CFN API
Neutron [3]:
Well we probably need some backwards compat glue to keep deploying
supported versions. More on that in the spec I'm drafting.
On 21 Jun 2014 12:26, Dan Prince dpri...@redhat.com wrote:
On Fri, 2014-06-20 at 16:51 -0400, Charles Crouch wrote:
- Original Message -
Not a great week
Hi, the HP1 tripleo test cloud region has been systematically failing
and rather than flogging it along we're going to strip it down and
bring it back up with some of the improvements that have happened over
the last $months, as well as changing the undercloud to deploy via
Ironic and other
Latest outage was due to nodepool having a stuck TCP connection to the
HP1 region again.
I've filed https://bugs.launchpad.net/python-novaclient/+bug/1323862
about it. If someone were to pick this up and run with it, it would be
super useful.
-Rob
On 24 May 2014 05:01, Clint Byrum
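For anyone picking up that novaclient bug: the usual mitigation for
connections that hang forever when a peer silently disappears is TCP
keepalives. A sketch below, with Linux-specific socket options and
illustrative thresholds (the peer address and port are placeholders):

import socket

# Placeholder peer; in the real case this would be the cloud API endpoint.
sock = socket.create_connection(("192.0.2.10", 8774), timeout=30)

# Enable keepalives: start probing after 60s idle, probe every 10s,
# and drop the connection after 5 unanswered probes.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)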
Hi Clint,
Please count me in.
Cristian
On 22/05/14 19:24, Clint Byrum cl...@fewbar.com wrote:
Ahoy there, TripleO interested parties. In the last few months, we've
gotten a relatively robust, though not nearly complete, CI system for
TripleO. It is a bit unorthodox, as we have a strong desire
I forgot to include a link explaining our cloud:
https://wiki.openstack.org/wiki/TripleO/TripleOCloud
Thanks!
Excerpts from Clint Byrum's message of 2014-05-22 15:24:05 -0700:
Ahoy there, TripleO interested parties. In the last few months, we've
gotten a relatively robust, though not nearly
Ahoy there, TripleO interested parties. In the last few months, we've
gotten a relatively robust, though not nearly complete, CI system for
TripleO. It is a bit unorthodox, as we have a strong desire to ensure
PXE booting works, and that requires us running in our own cloud.
We have this working,
Swift changed the permissions on the swift ring object file, which
broke tripleo deployments of swift (root:root mode 0600 files are not
readable by the 'swift' user). We've got a patch in flight
(https://review.openstack.org/#/c/83645/) that will fix this, but
until that lands please don't spend
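Until that patch lands, a sketch of a sanity check for the failure mode
described above; the ring paths are the standard /etc/swift ones, and
the corrective chown/chmod reflects what the fix needs to achieve (run
as root):

import glob
import grp
import os
import pwd

swift_uid = pwd.getpwnam("swift").pw_uid
swift_gid = grp.getgrnam("swift").gr_gid

for ring in glob.glob("/etc/swift/*.ring.gz"):
    st = os.stat(ring)
    mode = st.st_mode & 0o777
    # root:root 0600 means neither ownership nor the read bits work for
    # the 'swift' user; conservatively fix both ownership and mode.
    if st.st_uid != swift_uid or not mode & 0o400:
        print("%s: uid=%d mode=%o - fixing" % (ring, st.st_uid, mode))
        os.chown(ring, swift_uid, swift_gid)
        os.chmod(ring, 0o640)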
On 25/02/14 00:08, Robert Collins wrote:
Today we had an outage of the tripleo test cloud :(.
tl;dr:
- we were down for 14 hours
- we don't know the fundamental cause
- infra were not inconvenienced - yaaay
- it's all ok now.
Looks like we've hit the same problem again tonight, I've
Looking into it now.
On 27 Feb 2014 15:56, Derek Higgins der...@redhat.com wrote:
On 25/02/14 00:08, Robert Collins wrote:
Today we had an outage of the tripleo test cloud :(.
tl;dr:
- we were down for 14 hours
- we don't know the fundamental cause
- infra were not inconvenienced
On 27 February 2014 15:55, Derek Higgins der...@redhat.com wrote:
On 25/02/14 00:08, Robert Collins wrote:
Today we had an outage of the tripleo test cloud :(.
tl;dr:
- we were down for 14 hours
- we don't know the fundamental cause
- infra were not inconvenienced - yaaay
- it's all ok
On 27 February 2014 20:35, Robert Collins robe...@robertcollins.net wrote:
Checking new instance connectivity next.
DHCP is functional and no cloud-init errors, so we should be fully up.
-Rob
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud
Today we had an outage of the tripleo test cloud :(.
tl;dr:
- we were down for 14 hours
- we don't know the fundamental cause
- infra were not inconvenienced - yaaay
- it's all ok now.
Read on for more information, what little we have.
We don't know exactly why it happened yet, but the