Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-13 Thread Ian Y. Choi

Clark Boylan wrote on 1/14/2017 6:31 AM:

On Thu, Jan 12, 2017, at 02:36 PM, Ian Y. Choi wrote:

- I18n team completed tests with current Zanata (3.7.3) with Xenial [1],
but found one error
mentioned in [2]. It seems that more memory size allocation for
Zanata with Java 8 is needed.
Could you upgrade the memory size for translate-dev server first to
test again?

How much memory do we think will be necessary? The current instance is
an 8GB RAM instance. As long as we have quota for it, there should not
be a problem redeploying on a bigger instance.
Regarding this, I have recently seen that another Zanata server, 
https://translate.zanata.org/, already runs Zanata 3.9.6.


Alex, could you share how much memory is being used for that server 
(total server memory, JVM heap size, and so on)? I believe that server will 
use Java 8 (the OS will be Fedora, right?).
I think it would provide a good estimate of how much memory is needed for 
translate-dev and, in the future, translate as well.
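For what it's worth, on a WildFly-based Zanata install the knob in question is usually the JVM heap, set via JAVA_OPTS in standalone.conf. The sizes below are illustrative guesses for an 8GB host, not values taken from translate.zanata.org:

```shell
# bin/standalone.conf -- illustrative sizes only, not measured values.
# Leave headroom for the OS (and MySQL, if it shares the host).
JAVA_OPTS="-Xms2g -Xmx4g -XX:MaxMetaspaceSize=512m $JAVA_OPTS"
```

(Java 8 replaced PermGen with Metaspace, so MaxPermSize settings from older configs would need translating too.)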




- I remember that newer version of openstackid needs to be tested with
translate-dev.
So I have just uploaded this patch:
https://review.openstack.org/#/c/419667/ .
I18n team needs more tests, but I think it is a good time to change
to openstackid-dev for such testing.
Please ping me after openstackid-dev test with translate-dev is
completed with no error :)

- On last I18n team meeting, I18n team recognized that the backup would
be so important.
Is there more disks for such backup on translate-dev and
translate.o.o server?
And the approach implementing like [3] looks quite a good idea I
think. Thanks, Frank!

I have commented on this change. I would prefer we not backup the dev
server and only backup the production server. We don't treat our dev
servers as permanent or reliable installs as they may need to be
reloaded for various reasons to aid development and testing. Also we
should set up backups similar to how we do the other services (I left a
comment on this change with a link to an example).
Thanks for the kind comments, Clark! I have just read them and I agree with 
your comment.





- Can I have root access to translate-dev and translate server?

This is something that can be discussed with the infra team, typically
we would give access to someone assisting with implementation to the
-dev server while keeping the production server as infra-root only. I
will make sure fungi sees this.
I agree with your thoughts and also fungi's comments mentioned in 
another thread :)


One additional thought I want to add for consideration: it would be much 
better if at least one active I18n team member who assists with the 
implementation, ongoing upgrades, and operational issues 
(e.g., Zanata login not working) had access to the dev and 
production servers, sharing good insight on the Zanata infra side with 
I18n team members.
I also expect this would lead to better-aligned communication between the 
I18n team and the Infra team.
To accomplish this, at least one I18n team member would need good insight 
into things such as how the Zanata deployment has been implemented and 
how to read and analyze server logs.
The I18n member taking on such a role should not be me, but 
unfortunately, it is difficult for me to find another suitable I18n team 
member right now.


Now I would like to think more on this and discuss it with I18n team 
members, and then ask again once my thoughts firm up
(or I may need more discussion with Infra team members during the PTG).


With many thanks,

/Ian



Hope this helps,
Clark




___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [OpenStack-Infra] Nodepool v3 shim for talking to v2 zuul

2017-01-13 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2017-01-13 10:18:41 -0600:
> Hey all,
> 
> Part of the rollout plan for zuul v3 involves rolling out the new
> zookeeper-based nodepool launchers before we roll out zuul v3 itself.
> We've mostly spoken about this in a hand-wavy manner so far, but I think
> we may have a fairly simple answer of how to approach it - so I'd like
> to propose the following:
> 
> * Make a branch of current nodepool master that we don't intend to merge
> back ever.
> 
> * Replace the OpenStack interactions in the provider_manager with zk api
> calls.
> 
> * Make a copy of our nodepool.yaml file that has min-ready set to zero
> for everything.
> 
> * Run a copy of nodepool from the branch pointed at the new v3
> zookeeper-based nodepool.
> 
> This should allow the shim nodepool to make real-time requests for nodes
> of the new nodepool and attach them to the 2.5 ansible launchers. It
> makes the v3 nodepool the system of record. Once that's all in place, we
> should be able to also roll out a zuul v3 installation that is also
> pointed at the new nodepool, and have both shim-nodepool and zuul v3 be
> clients of nodepool v3.
> 
> Once we've finished migrating to zuul v3, we'll just delete the shim server.
> 
> Sound good to everyone?

Are the jobs so complicated that you can't just write a dedicated gearman
worker to do the translation to/from zk? I mean, gearman is really really
really simple for a reason.

Just saying... it might be simpler to write a throw-away 500-line python gear
worker than to modify nodepool. And you know how against rewrites I am,
but in this case it's not a rewrite, but a shim.
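In skeleton form, the throw-away gear worker described above might look like the sketch below. The gear and kazoo libraries are replaced with trivial in-memory stand-ins so the translation logic is visible on its own, and all names (the payload fields, the zookeeper paths) are assumptions for illustration, not the actual nodepool v3 protocol:

```python
import json
import uuid

class FakeZooKeeper:
    """Stands in for a kazoo client; stores node requests by path."""
    def __init__(self):
        self.nodes = {}

    def create(self, path, value):
        self.nodes[path] = value

def translate_job(zk, gearman_payload):
    """Translate one v2-style gearman node request into a v3-style
    zookeeper node-request record, and return the znode path."""
    request = json.loads(gearman_payload)
    req_id = uuid.uuid4().hex
    zk_record = json.dumps({
        "state": "requested",
        "node_types": [request["label"]],   # e.g. "ubuntu-xenial"
        "requestor": "nodepool-shim",
    })
    path = "/nodepool/requests/%s" % req_id
    zk.create(path, zk_record)
    return path

zk = FakeZooKeeper()
path = translate_job(zk, json.dumps({"label": "ubuntu-xenial"}))
print(path in zk.nodes)  # True
```

In a real worker the loop around this would be gear's getJob()/sendWorkComplete() cycle and a kazoo client watching the request znode for fulfillment; that plumbing is where most of the hypothetical 500 lines would go.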



Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-13 Thread Jeremy Stanley
On 2017-01-13 13:31:54 -0800 (-0800), Clark Boylan wrote:
> On Thu, Jan 12, 2017, at 02:36 PM, Ian Y. Choi wrote:
[...]
> > - Can I have root access to translate-dev and translate server?
> 
> This is something that can be discussed with the infra team, typically
> we would give access to someone assisting with implementation to the
> -dev server while keeping the production server as infra-root only. I
> will make sure fungi sees this.

Echoing Clark, as I really have nothing more to add... Ideally the
dev server should be identical enough to production under normal
circumstances that root access to dev is sufficient to test theories
and confirm issues. If you need logs or other similar artifacts from
the production instance, we have a dozen root admins scattered
around the globe who should be available to get those for you on
demand. If this arrangement is still inconvenient, then we can work
to improve dev/production symmetry or safely increase debugging data
availability for production as needed.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] [I18n] Regarding Zanata upgrade plan to 3.9.6 with Xenial: Help is needed

2017-01-13 Thread Clark Boylan
On Thu, Jan 12, 2017, at 02:36 PM, Ian Y. Choi wrote:
> - I18n team completed tests with current Zanata (3.7.3) with Xenial [1], 
> but found one error
>mentioned in [2]. It seems that more memory size allocation for 
> Zanata with Java 8 is needed.
>Could you upgrade the memory size for translate-dev server first to 
> test again?

How much memory do we think will be necessary? The current instance is
an 8GB RAM instance. As long as we have quota for it, there should not
be a problem redeploying on a bigger instance.

> - I remember that newer version of openstackid needs to be tested with 
> translate-dev.
>So I have just uploaded this patch: 
> https://review.openstack.org/#/c/419667/ .
>I18n team needs more tests, but I think it is a good time to change 
> to openstackid-dev for such testing.
>Please ping me after openstackid-dev test with translate-dev is 
> completed with no error :)
> 
> - On last I18n team meeting, I18n team recognized that the backup would 
> be so important.
>Is there more disks for such backup on translate-dev and 
> translate.o.o server?
>And the approach implementing like [3] looks quite a good idea I 
> think. Thanks, Frank!

I have commented on this change. I would prefer we not backup the dev
server and only backup the production server. We don't treat our dev
servers as permanent or reliable installs as they may need to be
reloaded for various reasons to aid development and testing. Also we
should set up backups similar to how we do the other services (I left a
comment on this change with a link to an example).
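For the production server, the backup being discussed could be as simple as a nightly database dump staged somewhere the backup system picks up. This is only an illustrative sketch, not the mechanism in the example linked from the review; the paths and database name are assumptions:

```shell
# /etc/cron.d/zanata-db-backup -- illustrative only; the real setup
# should follow the infra backup example linked on the review.
# m  h dom mon dow user command
15 3 * * * root mysqldump --single-transaction zanata | gzip > /var/backups/zanata-db.sql.gz
```
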

> - Can I have root access to translate-dev and translate server?

This is something that can be discussed with the infra team, typically
we would give access to someone assisting with implementation to the
-dev server while keeping the production server as infra-root only. I
will make sure fungi sees this.

Hope this helps,
Clark




Re: [OpenStack-Infra] Nodepool v3 shim for talking to v2 zuul

2017-01-13 Thread Monty Taylor
On 01/13/2017 11:32 AM, Clark Boylan wrote:
> On Fri, Jan 13, 2017, at 08:18 AM, Monty Taylor wrote:
>> Hey all,
>>
>> Part of the rollout plan for zuul v3 involves rolling out the new
>> zookeeper-based nodepool launchers before we roll out zuul v3 itself.
>> We've mostly spoken about this in a hand-wavy manner so far, but I think
>> we may have a fairly simple answer of how to approach it - so I'd like
>> to propose the following:
>>
>> * Make a branch of current nodepool master that we don't intend to merge
>> back ever.
>>
>> * Replace the OpenStack interactions in the provider_manager with zk api
>> calls.
>>
>> * Make a copy of our nodepool.yaml file that has min-ready set to zero
>> for everything.
>>
>> * Run a copy of nodepool from the branch pointed at the new v3
>> zookeeper-based nodepool.
>>
>> This should allow the shim nodepool to make real-time requests for nodes
>> of the new nodepool and attach them to the 2.5 ansible launchers. It
>> makes the v3 nodepool the system of record. Once that's all in place, we
>> should be able to also roll out a zuul v3 installation that is also
>> pointed at the new nodepool, and have both shim-nodepool and zuul v3 be
>> clients of nodepool v3.
>>
>> Once we've finished migrating to zuul v3, we'll just delete the shim
>> server.
>>
>> Sound good to everyone?
> 
> I'll admit I had to read this email several times before I understood
> what you are planning to do. To summarize, in case it helps anyone else:
> the idea is to have a throwaway version of nodepool that sits between
> zuul and the new zookeeper-based nodepool launchers. This throwaway
> nodepool will read demand from gearman and, instead of making OpenStack
> API calls to clouds to fulfill that demand, will make launch requests
> of the new zookeeper-based nodepool.

Yes - that's a great summary.

> Just to throw the idea out there would an alternative daemon that
> imported nodepool's demand and allocation calculator and zuul's
> zookeeper request abstraction (however that turns out) make sense? This
> way you don't have to map nodepool requests through an abstraction for
> the various OpenStack APIs. And you'd be running the actual code that
> zuul will be using. (Not sure which would be simpler/quicker to get
> working though).

That was actually my original plan - but the demand and allocation stuff
is all implemented, and the "get a node" part of the provider manager
API is actually a really small surface area, so my hypothesis is that
just replacing the node-launching API parts with the zk calls is less work.
However, if implementing it winds up getting hairy, I think
pivoting to the alternative daemon is certainly the other sane choice.



[OpenStack-Infra] docs.o.o looks great - doesn't it?

2017-01-13 Thread Andreas Jaeger
AFAIK everything looks fine with the new docs.o.o site: the server
handles the workload, and the few missing documents have been manually
copied over.

Thanks for everybody involved with this - thanks to the OpenStack infra
team for taking over docs.o.o and developer.o.o! Jim did the development
part, and others helped with the manual steps and reviewed. Thanks
especially to Jim, Jeremy, Paul, Ian, Yolanda, Anne!

Next steps are stopping publishing of content to the old Cloud Sites
server [1], marking the spec as implemented [2], and getting https on
the two sites (Jeremy will do that soon).

We decided in the last infra meeting to evaluate the status again next
Tuesday and I'll remove the WIP from [1] after the meeting.

Still, if you notice any problems, please speak up,

Andreas

[1] https://review.openstack.org/420135
[2] https://review.openstack.org/420138
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



Re: [OpenStack-Infra] [Fuel Plugin] Please add me to groups.

2017-01-13 Thread Clark Boylan
On Fri, Jan 13, 2017, at 12:24 AM, Andrey Nikitin wrote:
> Hello!
> 
> A few days ago I created the following request to create one more
> repository to store a Fuel Plugin code -
> https://review.openstack.org/#/c/413656/.
> As far as I can see the request is merged and the project is created. Could
> you please add me to the following groups to add other members there:
> - https://review.openstack.org/#/admin/groups/1687,members
> - https://review.openstack.org/#/admin/groups/1688,members ?

All done.

Clark



Re: [OpenStack-Infra] Nodepool v3 shim for talking to v2 zuul

2017-01-13 Thread Clark Boylan
On Fri, Jan 13, 2017, at 08:18 AM, Monty Taylor wrote:
> Hey all,
> 
> Part of the rollout plan for zuul v3 involves rolling out the new
> zookeeper-based nodepool launchers before we roll out zuul v3 itself.
> We've mostly spoken about this in a hand-wavy manner so far, but I think
> we may have a fairly simple answer of how to approach it - so I'd like
> to propose the following:
> 
> * Make a branch of current nodepool master that we don't intend to merge
> back ever.
> 
> * Replace the OpenStack interactions in the provider_manager with zk api
> calls.
> 
> * Make a copy of our nodepool.yaml file that has min-ready set to zero
> for everything.
> 
> * Run a copy of nodepool from the branch pointed at the new v3
> zookeeper-based nodepool.
> 
> This should allow the shim nodepool to make real-time requests for nodes
> of the new nodepool and attach them to the 2.5 ansible launchers. It
> makes the v3 nodepool the system of record. Once that's all in place, we
> should be able to also roll out a zuul v3 installation that is also
> pointed at the new nodepool, and have both shim-nodepool and zuul v3 be
> clients of nodepool v3.
> 
> Once we've finished migrating to zuul v3, we'll just delete the shim
> server.
> 
> Sound good to everyone?

I'll admit I had to read this email several times before I understood
what you are planning to do. To summarize, in case it helps anyone else:
the idea is to have a throwaway version of nodepool that sits between
zuul and the new zookeeper-based nodepool launchers. This throwaway
nodepool will read demand from gearman and, instead of making OpenStack
API calls to clouds to fulfill that demand, will make launch requests
of the new zookeeper-based nodepool.

Just to throw the idea out there: would an alternative daemon that
imported nodepool's demand and allocation calculator and zuul's
zookeeper request abstraction (however that turns out) make sense? That
way you wouldn't have to map nodepool requests through an abstraction
over the various OpenStack APIs, and you'd be running the actual code
that zuul will be using. (Not sure which would be simpler/quicker to get
working, though.)

Clark





Re: [OpenStack-Infra] [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-13 Thread Doug Hellmann
Excerpts from Takashi Yamamoto's message of 2017-01-13 12:43:54 +0900:
> hi,
> 
> On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
> > Hi
> >
> > As of today, the project neutron-vpnaas is no longer part of the neutron
> > governance. This was a decision reached after the project saw a dramatic
> > drop in active development over a prolonged period of time.
> >
> > What does this mean in practice?
> >
> > From a visibility point of view, release notes and documentation will no
> > longer appear on openstack.org as of Ocata going forward.
> > No more releases will be published by the neutron release team.
> > The neutron team will stop proposing fixes for the upstream CI, if not
> > solely on a voluntary basis (e.g. I still felt like proposing [2]).
> >
> > How does it affect you, the user or the deployer?
> >
> > You can continue to use vpnaas and its CLI via the python-neutronclient and
> > expect it to work with neutron up until the newton
> > release/python-neutronclient 6.0.0. After this point, if you want a release
> > that works for Ocata or newer, you need to proactively request a release
> > [5], and reach out to a member of the neutron release team [3] for approval.
> 
> i want to make an ocata release. (and more importantly the stable branch,
> for the benefit of consuming subprojects)
> for the purpose, the next step would be ocata-3, right?

I'll be happy to help you get the tags and branches set up this cycle,
to give you time to either learn to do it next cycle or to apply for
official status so the release team can manage it.

To work out the details, we can either talk in #openstack-release
or you can email me to let me know what SHA you want to use for the
third milestone tag for Ocata.

Doug

> 
> > Assuming that the vpnaas CI is green, you can expect to have a working
> > vpnaas system upon release of its package in the foreseeable future.
> > Outstanding bugs and new bug reports will be rejected on the basis of lack
> > of engineering resources interested in helping out in the typical OpenStack
> > review workflow.
> > Since we are freezing the development of the neutron CLI in favor of the
> > openstack unified client (OSC), the lack of a plan to make the VPN commands
> > available in the OSC CLI means that at some point in the future the neutron
> > client CLI support for vpnaas may be dropped (though I don't expect this to
> > happen any time soon).
> >
> > Can this be reversed?
> >
> > If you are interested in reversing this decision, now it is time to step up.
> > That said, we won't be reversing the decision for Ocata. There is quite a
> > curve to ramp up to make neutron-vpnaas worthy of being classified as a
> > neutron stadium project, and that means addressing all the gaps identified
> > in [6]. If you are interested, please reach out, and I will work with you to
> > add your account to [4], so that you can drive the neutron-vpnaas agenda
> > going forward.
> >
> > Please do not hesitate to reach out to ask questions and/or clarifications.
> >
> > Cheers,
> > Armando
> >
> > [1] https://review.openstack.org/#/c/392010/
> > [2] https://review.openstack.org/#/c/397924/
> > [3] https://review.openstack.org/#/admin/groups/150,members
> > [4] https://review.openstack.org/#/admin/groups/502,members
> > [5] https://github.com/openstack/releases
> > [6]
> > http://specs.openstack.org/openstack/neutron-specs/specs/stadium/ocata/neutron-vpnaas.html
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 



[OpenStack-Infra] Nodepool v3 shim for talking to v2 zuul

2017-01-13 Thread Monty Taylor
Hey all,

Part of the rollout plan for zuul v3 involves rolling out the new
zookeeper-based nodepool launchers before we roll out zuul v3 itself.
We've mostly spoken about this in a hand-wavy manner so far, but I think
we may have a fairly simple answer of how to approach it - so I'd like
to propose the following:

* Make a branch of current nodepool master that we don't intend to merge
back ever.

* Replace the OpenStack interactions in the provider_manager with zk api
calls.

* Make a copy of our nodepool.yaml file that has min-ready set to zero
for everything.

* Run a copy of nodepool from the branch pointed at the new v3
zookeeper-based nodepool.
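In that copy, the min-ready change would look roughly like this (the label, image, and provider names are placeholders, not our real config):

```yaml
# nodepool.yaml for the shim: min-ready 0 everywhere, so the shim never
# pre-launches nodes itself and only passes on real-time demand.
labels:
  - name: ubuntu-xenial
    image: ubuntu-xenial
    min-ready: 0
    providers:
      - name: some-cloud
```
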

This should allow the shim nodepool to make real-time requests for nodes
of the new nodepool and attach them to the 2.5 ansible launchers. It
makes the v3 nodepool the system of record. Once that's all in place, we
should be able to also roll out a zuul v3 installation that is also
pointed at the new nodepool, and have both shim-nodepool and zuul v3 be
clients of nodepool v3.

Once we've finished migrating to zuul v3, we'll just delete the shim server.

Sound good to everyone?

Monty



Re: [OpenStack-Infra] [openstack-dev] [neutron] [vpnaas] vpnaas no longer part of the neutron governance

2017-01-13 Thread Dariusz Śmigiel
2017-01-12 21:43 GMT-06:00 Takashi Yamamoto :
> hi,
>
> On Wed, Nov 16, 2016 at 11:02 AM, Armando M.  wrote:
>> Hi
>>
>> As of today, the project neutron-vpnaas is no longer part of the neutron
>> governance. This was a decision reached after the project saw a dramatic
>> drop in active development over a prolonged period of time.
>>
>> What does this mean in practice?
>>
>> From a visibility point of view, release notes and documentation will no
>> longer appear on openstack.org as of Ocata going forward.
>> No more releases will be published by the neutron release team.
>> The neutron team will stop proposing fixes for the upstream CI, if not
>> solely on a voluntary basis (e.g. I still felt like proposing [2]).
>>
>> How does it affect you, the user or the deployer?
>>
>> You can continue to use vpnaas and its CLI via the python-neutronclient and
>> expect it to work with neutron up until the newton
>> release/python-neutronclient 6.0.0. After this point, if you want a release
>> that works for Ocata or newer, you need to proactively request a release
>> [5], and reach out to a member of the neutron release team [3] for approval.
>
> i want to make an ocata release. (and more importantly the stable branch,
> for the benefit of consuming subprojects)
> for the purpose, the next step would be ocata-3, right?

Hey Takashi,
If you want to release a new version of neutron-vpnaas, please look at [1].
This is the file you need to update; based on the details you provide,
tags and branches will be cut.

[1] https://github.com/openstack/releases/blob/master/deliverables/ocata/neutron-vpnaas.yaml
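An entry in that deliverable file looks roughly like the following; the version number and hash are placeholders here, since the real values come from the release request itself:

```yaml
# deliverables/ocata/neutron-vpnaas.yaml (illustrative shape only)
launchpad: neutron-vpnaas
releases:
  - version: 10.0.0.0b3           # placeholder milestone version
    projects:
      - repo: openstack/neutron-vpnaas
        hash: 0123456789abcdef    # placeholder: the SHA to tag
```
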

BR, Dariusz



Re: [OpenStack-Infra] Ask.o.o down

2017-01-13 Thread Jeremy Stanley
On 2017-01-13 10:33:24 + (+), Marton Kiss wrote:
> You can find more details about the host here:
> http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=156
> It had a network outage somewhere, if you check the eth0, the
> traffic was zero.

Unfortunately I find no corresponding outage details listed at
https://status.rackspace.com/ nor any support tickets for the tenant
providing the instance for that service. The timeframe is
suspiciously right around when daily cron jobs would be running
(they start at 06:25 UTC) but I don't see anything in the system
logs that would indicate we ran anything that would paralyze the
system like that.
-- 
Jeremy Stanley



Re: [OpenStack-Infra] Ask.o.o down

2017-01-13 Thread Marton Kiss
Tom,

You can find more details about the host here:
http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=156
It had a network outage somewhere; if you check eth0, the traffic was zero.

Marton

On Fri, Jan 13, 2017 at 8:05 AM Tom Fifield  wrote:

> ... and at 0700Z it is back ...
>
> On 13/01/17 14:41, Tom Fifield wrote:
> > As at 2017-01-13 0641Z, Ask.o.o is refusing connections (at least on
> > IPv4) - tried from a couple of different hosts
> >
> > fifieldt@docwork2:~$ wget ask.openstack.org
> > --2017-01-13 06:40:49--  http://ask.openstack.org/
> > Resolving ask.openstack.org (ask.openstack.org)... 23.253.72.95,
> > 2001:4800:7815:103:be76:4eff:fe05:89f3
> > Connecting to ask.openstack.org (ask.openstack.org)|23.253.72.95|:80...
> > failed: Connection refused.
> >
> > ___
> > OpenStack-Infra mailing list
> > OpenStack-Infra@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
>
> ___
> OpenStack-Infra mailing list
> OpenStack-Infra@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>

[OpenStack-Infra] [Fuel Plugin] Please add me to groups.

2017-01-13 Thread Andrey Nikitin
Hello!

A few days ago I created the following request to create one more
repository to store a Fuel Plugin code -
https://review.openstack.org/#/c/413656/.
As far as I can see, the request is merged and the project is created. Could
you please add me to the following groups so that I can add other members there:
- https://review.openstack.org/#/admin/groups/1687,members
- https://review.openstack.org/#/admin/groups/1688,members ?


-- 
Andrey Nikitin
aniki...@mirantis.com



