Hi All
We deprecated and stopped releasing the ceph charm a few cycles back in
preference to the split ceph-osd/ceph-mon charms; consider this official
notification of retirement!
Cheers
James
__
OpenStack Development
+1
On Wed, 5 Sep 2018 at 15:48 Billy Olsen wrote:
> Hi,
>
> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
> a core member. Over the past couple of years Felipe has contributed
> numerous patches and reviews to the OpenStack charms [0]. His experience
> and knowledge of
Hi All
As outgoing PTL I have the honour of organising the team dinner for the
Stein PTG this week.
I'm proposing Wednesday night at Russell's Smokehouse:
https://www.russellssmokehouse.com/
Let me know if you will be along (and if you have a +1) by end of today and
I'll make the
Hi Rikimaru
On Fri, 31 Aug 2018 at 11:28 Rikimaru Honjo
wrote:
> Hello,
>
> I'm planning to write a feature support matrix[1] of nova-lxd and
> add it to nova-lxd repository.
> A similar document exists as todo.txt[2], but this is old.
>
> Can I write it?
>
Yes please!
> If someone is
Hi Folks
We have a half day planned on Monday afternoon in Denver for the customary
discussion around OpenStack upgrades.
I've started a pad here:
https://etherpad.openstack.org/p/upgrade-sig-ptg-stein
Please feel free to add ideas and indicate if you will be participating in
the discussion.
Hi All
I won't be standing for PTL of OpenStack Charms for this upcoming cycle.
It's been my pleasure to have been PTL since the project was accepted into
OpenStack, but it's time to let someone else take the helm. I'm not going
anywhere but expect to have a bit of a different focus for this
Hi All
tl;dr we (the original founders) have not managed to invest the time to get
the Upgrades SIG booted - time to hit reboot or time to poweroff?
Since Vancouver, two of the original SIG chairs have stepped down leaving
me in the hot seat with minimal participation from either deployment
Hi All
Unfortunately I can't make today's IRC meeting at 1600 UTC.
Should be back for next week, but I think we need to do some rescheduling to
fit better with other ops and dev meetings.
Cheers
James
___
OpenStack-operators mailing list
Hi All
Just a quick reminder that the Upgrade SIG IRC meeting will be held at 1600
UTC tomorrow (Tuesday) in #openstack-meeting-4.
If you're interested in helping improve the OpenStack upgrade experience be
sure to attend!
See [0] for previous meeting minutes and our standing agenda.
Regards
Hi All
Lujin, Lee and I held the inaugural IRC meeting for the Upgrades SIG
this week (see [0]). Suffice to say that, due to other time pressures,
setup of the SIG has taken a lot longer than desired, but hopefully now we
have the ball rolling we can keep up a bit of momentum.
The Upgrades
Hi All
I finally got round to writing up my summary of the Upgrades session at the
PTG in Dublin (see [0]).
One outcome of that session was to form a new SIG centered on Upgrading
OpenStack - I'm pleased to announce that the SIG has been formally accepted!
The objective of the Upgrade SIG is to
Hi Torin
On Tue, 13 Mar 2018 at 13:59 Torin Woltjer
wrote:
> Thank you for the response James.
>
> I now have a couple of further questions regarding boot volume support on
> nova-lxd.
>
> Is this feature on the radar?
>
Not right now; I'm not entirely sure it's
Hi Torin
On Mon, 12 Mar 2018 at 21:52, Torin Woltjer
wrote:
> Hello,
>
> I am looking to deploy an openstack cluster using LXD for compute and Ceph
> for storage, and I was running into some doubt as to whether this was
> possible; and doubt that nova-lxd was mature
Hi Aakash
On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote:
> Hi,
>
> I had previously put in a mail about the development for openstack-ovn
> charm. Sorry it took me this long to get back, was involved in other
> projects.
>
> I have submitted a charm spec for the above charm.
Hi All
We're not quite fully baked with Queens testing for the OpenStack charms
for this week so we're going to push back a week to the 8th March to allow
pre-commit functional testing updates to land.
Cheers
James
Hi Team
As I'm only managing to get to the PTG for Mon/Tues, let's schedule a dinner
for Monday night; I'll sort out a venue - lemme know direct this week if
you'll be coming along!
Cheers
James
Hi Amit
(re-titled thread with scoped topics)
As Matt has already referenced, [0] is a good starting place for using the
nova-lxd driver.
On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote:
> Hello,
>
> I have a running OpenStack Ocata setup on which I am able to launch VMs.
>
+1
On Wed, 14 Feb 2018 at 11:29 Liam Young wrote:
> Hi,
>
> I would like to propose that we do not support the notifications
> method for automatically creating DNS records in Queens+. This method
> for achieving Neutron integration has been superseded both upstream
>
+1 from me
On Thu, 8 Feb 2018 at 18:23 Billy Olsen wrote:
> Dmitrii easily gets a +1 from me!
>
> On 02/08/2018 09:42 AM, Alex Kavanagh wrote:
> > Hi
> >
> > I'd like to propose Dmitrii Shcherbakov to join the launchpad
> > "OpenStack Charmers" team. He's done some
Hi Thierry
On Wed, 7 Feb 2018 at 11:42 Thierry Carrez wrote:
> Hi everyone,
>
> I was wondering if anyone would be interested in brainstorming the
> question of how to better align our release cycle and stable branch
> maintenance with the OpenStack downstream consumption
Hi All
It will (probably) come as no surprise that I'd like to announce
my candidacy for PTL of OpenStack Charms [0]!
We've made some good progress in the last cycle with some general
housekeeping across the charms set, including removal of untested
and generally unused database and
Hi Sandor
On Fri, 26 Jan 2018 at 13:32 Sandor Zeestraten wrote:
> Hey OpenStack Charmers,
>
> We have a Newton deployment on MAAS with 3 controller machines running all
> the usual OpenStack controller services in 3x HA with the hacluster charm
> in LXD containers. Now we'd
Hi Team
The Dublin PTG is not so far away now, so let's start on the agenda for our
Devroom:
https://etherpad.openstack.org/p/DUB-charms-devroom
We had a fairly formal agenda of design related topics in Denver for the
first day, and spent most of the second day mini-sprinting on various
Hi Thierry
On Thu, 18 Jan 2018 at 10:20 Thierry Carrez wrote:
> Hi everyone,
>
> Here is the proposed pre-allocated track schedule for the Dublin PTG:
>
>
>
Hi
Corey and I have been working through the first Queens milestones for
Ubuntu (and the UCA) - both networking-bagpipe and networking-bgpvpn don't
have published tarballs on tarballs.openstack.org.
Would it be possible to get those up? Or we can cut our own from the git
repo for this milestone
Hi All
Apologies for the late creation of this pad:
https://etherpad.openstack.org/p/SYD-forum-charms-ops-feedback
If you're planning on attending this session please add your name and any
topics for discussion!
Cheers
James
Hi Team
Whilst working on migrating to using Python 3 as the default charm
execution environment, I've hit a snag with presentation of data from
principal charms to the hacluster subordinate charm.
Specifically, the data presented on the relation is a simple str
representation of Python dicts
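The snag can be sketched in a few lines of Python; note the relation key and value below are made up for illustration, not taken from a real charm:

```python
# Minimal sketch of the relation-data snag described above; the
# 'res_ks_haproxy' key/value pair is hypothetical.
import ast
import json

resources = {'res_ks_haproxy': 'lsb:haproxy'}

# A principal charm that serialises with str() emits a Python repr
# (single quotes), not JSON:
relation_value = str(resources)

try:
    # json.loads() rejects the repr form outright...
    parsed = json.loads(relation_value)
except json.JSONDecodeError:
    # ...so the subordinate must fall back to ast.literal_eval(),
    # which safely evaluates Python literals without executing code:
    parsed = ast.literal_eval(relation_value)

assert parsed == resources

# Emitting JSON instead would keep the relation language-neutral:
assert json.loads(json.dumps(resources)) == resources
```

Serialising with json.dumps() on the principal side would avoid tying the wire format to Python's repr at all.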
On Tue, 3 Oct 2017 at 18:15 Doug Hellmann wrote:
> Excerpts from Jesse Pretorius's message of 2017-10-03 15:57:17 +:
> > On 10/3/17, 3:01 PM, "Doug Hellmann" wrote:
> >
> > >> Given that this topic has gone through several cycles of discussion
>
Hi All
Reminder that as this Thursday is the first Thursday of the month, it's
officially a bug triage/fix day!
Our new (23) queue is looking better than ever [0] so it would be great to
get that down to near 0.
After that focus should switch to High (62) and then Medium (174) bugs
already
Hi OpenStack Charm users
I've suggested a feedback session at the Forum in Sydney for any operators
using charms for deployment to provide direct feedback to the members of
the development team who are attending the Summit:
http://forumtopics.openstack.org/cfp/details/15
Hope to see some of
Hi All
Here’s a summary of the charm related discussion from PTG last week.
# Cross Project Discussions
## Skip Level Upgrades
This topic was discussed at the start of the week, in the context of
supporting upgrades across multiple OpenStack releases for operators. What
was immediately
Hi Team
I'm cancelling next Monday's IRC; we're (mostly) meeting for the PTG anyway
and can re-sync the following Monday.
Cheers
James
On Mon, 4 Sep 2017 at 17:17 Sean McGinnis wrote:
> First - yay, awesome work. Glad to see this made available quickly.
> Nice work.
>
Ta
> But second - are you aware of deployment issues with Cinder API
> with these packages? I've had a report from someone on IRC that
>
(Apologies for the slight lag from Friday's actual release - end user email
failure...)
Hi All,
The Ubuntu OpenStack team at Canonical is pleased to announce the
general availability of OpenStack Pike for Ubuntu 16.04 LTS via the
Ubuntu Cloud Archive.
Details of the Pike release can be found at
Hi Aakash
On Tue, 29 Aug 2017 at 05:09 Aakash Kt wrote:
> Hello all,
> Resending this mail since I think there might have been some error
> sending it the last time.
>
> I am looking to develop an openstack bundle which uses OVN as the SDN.
> I have been reading :
Hi All
As I'm off in the backwaters of Scotland with zero chance of any internet
access for the next two weeks, I'm delegating my PTL responsibilities to
the capable Ryan Beisner until my return.
I'll be back just after the charms final freeze in the leadup to the charms
release at the start of
Hi All
I've started a planning etherpad for the PTG next month; feel free to add
topics you want to discuss/sprint on during the week.
https://etherpad.openstack.org/p/ptg-queens-charms
Cheers
James
Hi All
I would like to announce my candidacy for PTL of the OpenStack Charms
project
for the Queens development cycle.
We've made some good progress during the Pike cycle in terms of improving
documentation, with a new deployment guide in the works which should make
things
much easier for new
Morning
On Thu, 27 Jul 2017 at 23:58 Michael Still wrote:
> Hi,
>
> I'm cc'ing openstack-dev because your email is the same as the comment you
> made on the relevant review, and I think getting visibility with the wider
> Nova team is a good idea.
>
> Unfortunately this is a
Hi Chris
On Tue, 27 Jun 2017 at 02:57 Chris Apsey wrote:
> James,
>
> Bug report submitted.
>
Thanks (https://bugs.launchpad.net/cloud-archive/+bug/1700677 for reference
here).
Fix committed to package git repository; however we have some stable
release updates
Tweaking subject line a bit...
On Mon, 26 Jun 2017 at 02:27 Chris Apsey wrote:
> All,
>
> Doing some testing prior to moving to Ocata from Newton, and I ran into
> this issue when trying to get the nova placement api set up:
>
> ==
>
On Wed, 21 Jun 2017 at 20:08 Jason Dillaman wrote:
> On Wed, Jun 21, 2017 at 12:32 PM, Jon Bernard wrote:
> > I suspect you'd want to enable layering at minimum.
>
> I'd agree that layering is probably the most you'd want to enable for
> krbd-use cases
Hi Ian
On Fri, 9 Jun 2017 at 07:57 Ian Wienand wrote:
> Hi,
>
> If you know of someone in control of whatever is trying to use this
> account, running on 91.189.91.27 (a canonical IP), can you please turn
> it off. It's in a tight loop failing to connect to gerrit, which
>
Hi Thomas
On Tue, 23 May 2017 at 09:14 wrote:
> Hi James,
>
> FYI, exabgp 4.0.0 has been released and this release can be packaged to
> satisfy networking-bagpipe needs.
> A request for adding exabgp as a proper OpenStack requirement is in
> flight:
Hi Team
Just a quick reminder that this Thursday is charm bug day!
Please focus on triage and resolution of bugs across the openstack charms -
the new bugs URL is in the topic in #openstack-charms on Freenode IRC.
Happy bug hunting!
Cheers
James
Hi All
The OpenStack summit is nearly upon us and for this summit we're running a
project onboarding session on Monday at 4.40pm in MR-105 (see [0] for full
details) for anyone who wants to get started either using the OpenStack
Charms or contributing to the development of the Charms,
The
Hi Thomas
On Thu, 27 Apr 2017 at 17:03 wrote:
> Indeed, we have moved from shipping a fork of an old exabgp in bagpipe-bgp
> (a.k.a."vendoring", i.e. baaad...) to using upstream, but we have
> dependencies on exabgp development branch.
>
> I've been in touch with ExaBGP
Hi
I'm working on the wider networking-* packages we have in Ubuntu for Pike
milestone 1 and noticed that exabgp is currently being pulled in from the
master branch of exabgp; any ideas when you might be able to switch to a
released version of exabgp?
Cheers
James
Hi Clark
On Tue, 4 Apr 2017 at 00:08 Clark Boylan wrote:
> One of the major sets of issues currently affecting gate testing is
> Libvirt stability. Elastic-recheck is tracking Libvirt crashes for us
> and they happen frequently [0][1][2]. These issues appear to only affect
On Tue, 4 Apr 2017 at 14:38 Daniel P. Berrange wrote:
> On Mon, Apr 03, 2017 at 04:06:53PM -0700, Clark Boylan wrote:
> > Hello,
> >
> > One of the major sets of issues currently affecting gate testing is
> > Libvirt stability. Elastic-recheck is tracking Libvirt crashes for
On Wed, 22 Mar 2017 at 15:10 Corey Bryant
wrote:
[...]
> +1
>
> I have full confidence in Dmitrii. He's already a great asset to snaps
> and will be great to have as a core reviewer.
>
And then there were three...
Welcome to the core reviewers team, Dmitrii!
Cheers
Hi Snappers
Dmitrii did some good work on the ceilometer snap and has been providing
reviews and feedback of other changes in the queue over the last few months
as well as hanging out and being a sounding board/answering questions in
#openstack-snaps.
He's also working out how to get libvirt
Hi Kendall
On Wed, 15 Mar 2017 at 18:22, Kendall Nelson wrote:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! The idea is that these rooms will provide a place for
> new contributors to a given project to
Hi All
Just a reminder that we'll be running our regular bug day on the 2nd March
(this coming Thursday).
Focus, as always, is to touch new bugs and work through in priority order
for any fixes!
Please co-ordinate any activity in the #openstack-charms channel!
Cheers
James
Hi All
I'm pleased to announce the 17.02 release of the OpenStack Charms.
In addition to 120 bug fixes across the charms and support for the Ocata
OpenStack release, there are new charms for Ceph FS and Manila and to
support integration of Keystone with LDAP/Active Directory leveraging
domain
Hi Team
As most people are at the PTG today, we'll skip today's team IRC meeting.
Cheers
James
Hi All
Snap packages for rally (0.7.0), tempest (14.0.0) and openstackclients
(currently Newton aligned) are available for use; the associated git
repositories can also be found under the openstack org on github (see [0],
[1], [2]). Git repos also contain details of use of each snap.
You can
Hi Team
We're only a few weeks off the PTG, so I think it's about time we started
the ball rolling on planning our time out over the Monday/Tuesday.
I've created an etherpad so we can brainstorm a schedule for the two days:
https://etherpad.openstack.org/p/openstack-charms-ptg-pike
If you're
Hi All
Here are the links from today's Charms IRC meeting:
Agenda:
https://etherpad.openstack.org/p/openstack-charms-weekly-meeting-20170129
Minutes:
http://eavesdrop.openstack.org/meetings/charms/2017/charms.2017-01-30-10.03.html
Minutes (text):
Hi Team
Just a quick reminder that next Thursday marks our second bug day for the
year.
Please focus on triage and resolution of bugs across the openstack charms -
the new bugs URL is in the topic in #openstack-charms on Freenode IRC.
Happy bug hunting!
Cheers
James
Hi All
I would like to announce my candidacy for PTL of the OpenStack Charms
project.
Over the Ocata cycle, we've been incubating the community of developers
around
the Charms, with new charms for Murano, Trove, Mistral and CloudKitty all
due
to be included in the release in February.
We've
Hi All
I’ve been working with a few folk over the last few months to see if snaps
(see [0]) might be a good alternative approach to packaging and
distribution of OpenStack.
As OpenStack projects are Python based, producing snaps has been relatively
trivial with the snapcraft python plugin, which
Hi All
Just a quick reminder that this Thursday (5th January) is our first bug day
for charms of the year.
Objective is to blast through the un-triaged bug backlog assigning some
initial priorities and then fixup as many bugs as possible!
Please co-ordinate via #openstack-charms so we don't all
Hi Julien
On Tue, 3 Jan 2017 at 09:17 Julien Danjou wrote:
> > In the current ceilometer charms, we retain all ceilometer data
> > indefinitely; the TTL can be overridden by users using configuration
> > options, but to me it feels like maybe retaining all data forever by
>
Hi All
In the current ceilometer charms, we retain all ceilometer data
indefinitely; the TTL can be overridden by users using configuration
options, but to me it feels like maybe retaining all data forever by
default is a trip hazard to users, and the actions required to backout of a
full DB for
Hi Team
Our last (and only) bug day was quite popular/successful, so let's try and
have one focus day on bugs a month.
So from January, the first Thursday of the month will officially be 'Charm
Bug Squash Day'.
The objective of the day is to triage any untouched bugs, sift through
triaged bugs
Hi All
As we're approaching a period where quite a few people will be having time
off, I'm cancelling the IRC meetings on Mondays (1000 and 1700 on alternate
weeks) until the 9th January at 1700 UTC - at which point we'll resume
normal service, with the next meeting after that at 1000 UTC on the
Hi Folks
We had an organised bug day prior to the OpenStack Summit in Barcelona; I
felt that this focused everyone on collaborating on bugs in a good way,
and gave us a great checkpoint on what the key issues are that people are
hitting and reporting back on the charms.
I'd like to propose
Hi James
On Thu, 17 Nov 2016 at 20:05 James Beedy wrote:
> Is there a specific reason the barbican charm doesn't have the
> os-{internal,private}-hostname config params?
>
The layers and charms.openstack don't currently have support for the
os-*-hostname configuration
Hi All
After some extended travel after the summit, I've finally got round to
writing a summary of the two design summit sessions we held in Barcelona
(see [0] and [1] for full notes).
1) Cross Charm Initiatives
We had general agreement to switch all charms to Python 3, dropping charm
support
Hi Charmers
I've received a draft version of our project logo, using the mascot we
selected together. A final version (and some cool swag) will be ready for
us before the Project Team Gathering in February. Before they make our logo
final, they want to be sure we're happy with our mascot.
We can
Hi Charmers
The Ocata Design Summit is upon us; we have fishbowl sessions on Wednesday
afternoon, and workrooms first thing on Friday morning:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/global-search?t=Charms
I've created pads for the fishbowl sessions:
Hi All,
We are pleased to announce the 16.10 release of the OpenStack Charms.
Highlights include:
* OpenStack Newton support
* Ubuntu 16.10 support
* Juju 2.0 application version support
* AppArmor support
For more details please see the full release notes here:
Hi All
Following on from yesterday's charm release (see [0]), master branches are
now open for general development across the OpenStack Charms.
Thanks again to everyone who contributed to the 16.10 charm release!
Cheers
James
[0] http://docs.openstack.org/developer/charm-guide/1610.html
Hi Charmers
Just a reminder that as of the end of yesterday, we're in feature freeze
for the 16.10 charm release in two weeks time; if you have a compelling
feature that you feel should be considered, please email openstack-dev with
a request for a freeze exception with details of the change
Hi Team
We're running up towards feature freeze on the 29th of September for our charm
release mid October.
So that we have an accurate view of bugs across the OpenStack Charms in
advance of that date, I'm proposing we have a bug day next Thursday (22nd
September).
Anyone is welcome to participate
Hi Neil
On Tue, 13 Sep 2016 at 20:43 Neil Jerram wrote:
> Should it be possible to run two OpenStack charm units, that both use
> haproxy to load balance their APIs, on the same machine? Or is there some
> doc somewhere that says that a case like that should use separate
Hi
I would like to announce my candidacy for PTL of the OpenStack Charms
project (see [0]).
I'm the incumbent and first PTL of this project, which joined the OpenStack
project this year; Right now we have around 30 charms for deploying
various parts of OpenStack, with an active core team 100%
On Fri, 9 Sep 2016 at 01:17 Emilien Macchi wrote:
[...]
> 3) Disable Ironic testing on Ubuntu. Packages are broken in recent
> Newton upgrade. They are working on it.
>
Ironic packaging issue fixed and released to newton-updates; that will
teach me to spend a morning tidying
On Wed, 7 Sep 2016 at 14:30 Ian Cordasco wrote:
[...]
> https://review.openstack.org/366631
> >
> > The combination of oslo.context 2.9.0 + positional 1.0.1 (which is the
> > current minimum requirement) results in various unit test failures in
> > barbican, related to
On Wed, 7 Sep 2016 at 14:51 Ian Cordasco wrote:
> Hey James!
>
> There's already a discussion about this with the [requirements] and
> [ffe] tags by Matthew Thode. Could we centralize the discussion on
> that thread please?
>
Of course - missed that whilst eating lunch!
Hi
I'm out of contact from everything electronic next week; back on the 22nd
August.
David Ames (thankyou!) will be covering any Charms PTL related matters in
my absence and generally keeping the wheels on development.
Cheers
James
On Mon, 8 Aug 2016 at 18:19 Ryan Beisner wrote:
> Greetings,
>
> I would like to nominate David Ames for addition to the
> charms-release team, as he has played a valuable role in the charm release
> processes. This change will grant privileges such as new stable
Hi Ihar
On Mon, 8 Aug 2016 at 10:37 Ihar Hrachyshka wrote:
[...]
> Any update on re-tagging the mitaka release of networking-l2gw with the
> > correct semantic version? I'm working on packaging updates for Debian
> > and Ubuntu, and don't really want to push in 2016.1.0,
Hi Networking SFC team
I'm trying to get a view on a few Neutron related projects with the
objective of lining up a release in Ubuntu and Debian alongside OpenStack
Newton of vmware-nsx, networking-l2gw, networking-sfc and tap-as-a-service.
What are your plans for Newton? I can push snapshots
Hi Carl
On Thu, 19 May 2016 at 21:51 Carl Baldwin wrote:
> On Thu, May 19, 2016 at 7:09 AM, Doug Hellmann
> wrote:
> > We have the same issue with version numbers regressing no matter when we
> > cut the next release, so it's up to the team. It might
Hi Andrey
On Wed, 3 Aug 2016 at 14:35 Andrey Pavlov wrote:
> Instead of adding one more relation to glance, I can relate my charm to
> new relation 'cinder-volume-service'
>
almost
>
> So there is no more changes in the glance charm.
> And additional in my charm will be