Re: [openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Magnum Queens uses Kubernetes 1.9.3 by default.
You can upgrade to v1.10.11-1. From a quick test,
v1.11.5-1 is also compatible with 1.9.x.
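
For a Queens cluster the upgrade should look much like the Rocky
commands quoted below, only with the v1.10.11-1 tag; a rough sketch,
assuming the same openstackmagnum apiserver image naming:

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1 kube-apiserver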

We are working to make this painless; sorry you
have to SSH into the nodes for now.

Cheers,
Spyros

On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis  wrote:

> Hello all,
>
> Following the vulnerability [0], with magnum rocky and the kubernetes
> driver on fedora atomic you can use this tag "v1.11.5-1" [1] for new
> clusters. To upgrade the apiserver in existing clusters, on the master
> node(s) you can run:
> sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
> sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver
>
> You can upgrade the other k8s components with similar commands.
>
> I'll share instructions for magnum queens tomorrow morning CET time.
>
> Cheers,
> Spyros
>
> [0] https://github.com/kubernetes/kubernetes/issues/71411
> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/
>

[openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Hello all,

Following the vulnerability [0], with magnum rocky and the kubernetes driver
on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To
upgrade the apiserver in existing clusters, on the master node(s) you can run:
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver

You can upgrade the other k8s components with similar commands.
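
For example, a rough sketch for the controller manager and the scheduler,
assuming the matching openstackmagnum images exist with the same tag and
that the system containers are installed under their default names on the
master:

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1 kube-controller-manager
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1 kube-scheduler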

I'll share instructions for magnum queens tomorrow morning CET time.

Cheers,
Spyros

[0] https://github.com/kubernetes/kubernetes/issues/71411
[1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/

[openstack-dev] [magnum] Upcoming meeting 2018-09-11 Tuesday UTC 2100

2018-09-11 Thread Spyros Trigazis
Hello team,

This is a reminder for the upcoming magnum meeting [0].

For convenience you can import this from here [1] or view it in html here
[2].

Cheers,
Spyros

[0]
https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
[1]
https://calendar.google.com/calendar/ical/dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com/public/basic.ics
[2]
https://calendar.google.com/calendar/embed?src=dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com&ctz=Europe/Zurich


Re: [openstack-dev] [release][requirements][python-magnumclient] Magnumclient FFE

2018-08-06 Thread Spyros Trigazis
It is constraints only. There is no project
that requires the new version.

Spyros

On Mon, 6 Aug 2018, 19:36 Matthew Thode,  wrote:

> On 18-08-06 18:34:42, Spyros Trigazis wrote:
> > Hello,
> >
> > I have requested a release for python-magnumclient [0].
> > Per Doug Hellmann's comment in [0], I am requesting a FFE for
> > python-magnumclient.
> >
>
> My question to you is if this needs to be a constraints only thing or if
> there is some project that REQUIRES this new version to work (in which
> case that project needs to update its exclusions or minimum).
>
> --
> Matthew Thode (prometheanfire)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [releease][ptl] Missing and forced releases

2018-08-06 Thread Spyros Trigazis
Hello,

I have requested a release for python-magnumclient [0].
Per Doug Hellmann's comment in [0], I am requesting a FFE for
python-magnumclient.

Apologies for the inconvenience,
Spyros

[0] https://review.openstack.org/#/c/589138/


On Fri, 3 Aug 2018 at 18:52, Sean McGinnis  wrote:

> Today the release team reviewed the rocky deliverables and their releases
> done
> so far this cycle. There are a few areas of concern right now.
>
> Unreleased cycle-with-intermediary
> ==
> There is a much longer list than we would like to see of
> cycle-with-intermediary deliverables that have not done any releases so
> far in
> Rocky. These deliverables should not wait until the very end of the cycle
> to
> release so that pending changes can be made available earlier and there
> are no
> last minute surprises.
>
> For owners of cycle-with-intermediary deliverables, please take a look at
> what
> you have merged that has not been released and consider doing a release
> ASAP.
> We are not far from the final deadline for these projects, but it would
> still
> be good to do a release ahead of that to be safe.
>
> Deliverables that miss the final deadline will be at risk of being dropped
> from
> the Rocky coordinated release.
>
> Unreleased client libraries
> ==
> The following client libraries have not done a release:
>
> python-cloudkittyclient
> python-designateclient
> python-karborclient
> python-magnumclient
> python-searchlightclient*
> python-senlinclient
> python-tricircleclient
>
> The deadline for client library releases was last Thursday, July 26. This
> coming Monday the release team will force a release on HEAD for these
> clients.
>

The release I proposed in [0] is the current HEAD of the master branch.


>
> * python-searchlight client is currently planned on being dropped due to
>   searchlight itself not having met the minimum of two milestone releases
>   during the rocky cycle.
>
> Missing milestone 3
> ===
> The following projects missed tagging a milestone 3 release:
>
> cinder
> designate
> freezer
> mistral
> searchlight
>
> Following policy, a milestone 3 tag will be forced on HEAD for these
> deliverables on Monday.
>
> Freezer and searchlight missed previous milestone deadlines and will be
> dropped
> from the Rocky coordinated release.
>
> If there are any questions or concerns, please respond here or get ahold of
> someone from the release management team in the #openstack-release channel.
>
> --
> Sean McGinnis (smcginnis)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread Spyros Trigazis
Hello OpenStack community!

I would like to nominate myself as PTL for the Magnum project for the
Stein cycle.

In the last cycle magnum became more stable and is reaching the point
of being a feature-complete solution for providing managed container
clusters for private or public OpenStack clouds. Also during this cycle
the community around the project became healthier and more sustainable.

My goals for Stein are to:
- complete the work on cluster upgrades and cluster healing
- keep up with the latest releases of Kubernetes and Docker in stable
  branches and improve their release process
- improve the documentation for cloud operators
- continue building the community that supports the project

Thanks for your time,
Spyros

strigazi on Freenode

[0] https://review.openstack.org/#/c/586516/


Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-07-24 Thread Spyros Trigazis
Hello list,

After trial and error this is the new layout of the magnum meetings plus
office hours.

1. The meeting moves to Tuesdays 2100 UTC starting today
2.1 Office hours for strigazi Tuesdays: 1300 to 1400 UTC
2.2 Office hours for flwang Wednesdays : 2200 to 2300 UTC

Cheers,
Spyros

[0] https://wiki.openstack.org/wiki/Meetings/Containers


On Tue, 26 Jun 2018 at 04:46, Fei Long Wang  wrote:

> Hi Spyros,
>
> Thanks for posting the discussion output. I'm not sure I can follow the
> idea of simplifying the CNI configuration. We have both calico and
> flannel for k8s, and if we put both of them into a single config script,
> the script could become very complex. That's why I think we should define
> some naming and logging rules/policies for those scripts, to make
> long-term maintenance and our lives easier. Thoughts?
>
> On 25/06/18 19:20, Spyros Trigazis wrote:
>
> Hello again,
>
> After Thursday's meeting I want to summarize what we discussed and add
> some pointers.
>
>
> - Work on using the out-of-tree cloud provider and move to the new
>   model of defining it:
>   https://storyboard.openstack.org/#!/story/1762743
>   https://review.openstack.org/#/c/577477/
> - Configure kubelet and kube-proxy on master nodes.
>   The story for the master node label can be extended:
>   https://storyboard.openstack.org/#!/story/2002618
>   or we can add a new one.
> - Simplify CNI configuration; we have calico and flannel. Ideally we
>   should have a single config script for each one. We could move flannel
>   to the kubernetes-hosted version that uses kubernetes objects for
>   storage (the way recommended by flannel and how it is done with kubeadm).
> - magnum support in gophercloud:
>   https://github.com/gophercloud/gophercloud/issues/1003
> - *needs discussion* update the version of the heat templates (pike or
>   queens). This needs its own thread.
> - Post-deployment scripts for clusters. I have had this for some time,
>   but doing it in heat is slightly (not a lot) complicated. Most magnum
>   users favor the simpler solution of passing a url of a manifest or
>   script to the cluster (at least let's add a sha512sum).
> - Simplify the addition of custom labels/parameters. To avoid patching
>   magnum, it would be more ops-friendly to have a generic field of custom
>   parameters.
>
> Not discussed in the last meeting but we should in the next ones:
>
> - Allow cluster scaling from different users in the same project:
>   https://storyboard.openstack.org/#!/story/2002648
> - Add the option to remove a node from a resource group for swarm
>   clusters, like in kubernetes:
>   https://storyboard.openstack.org/#!/story/2002677
>
> Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday
> 1700UTC.
>
> You can always consult this page [1] for future meetings.
>
> Cheers,
> Spyros
>
> [1] https://wiki.openstack.org/wiki/Meetings/Containers
>
> On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  wrote:
>
>> Hello list,
>>
>> We are going to have a second weekly meeting for magnum for 3 weeks
>> as a test to reach out to contributors in the Americas.
>>
>> You can join us tomorrow (or today for some?) at 1700UTC in
>> #openstack-containers .
>>
>> Cheers,
>> Spyros
>>
>>
>>
>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-16 Thread Spyros Trigazis
Hello list,

I'm excited to nominate Feilong as Core Reviewer for the Magnum project.

Feilong has contributed many features, like Calico as an alternative CNI for
kubernetes, making coredns scale proportionally to the cluster, improving
admin operations on clusters and improving multi-master deployments. Apart
from contributing to the project itself, he has been contributing to other
projects like gophercloud and shade, he has been very helpful with code
reviews, and he tests and reviews all incoming patches. Finally, he is very
responsive on IRC and on the ML.

Thanks for all your contributions Feilong, I'm looking forward to working
with
you more!

Cheers,
Spyros


Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-25 Thread Spyros Trigazis
Hello again,

After Thursday's meeting I want to summarize what we discussed and add some
pointers.


   - Work on using the out-of-tree cloud provider and move to the new model
     of defining it:
     https://storyboard.openstack.org/#!/story/1762743
     https://review.openstack.org/#/c/577477/
   - Configure kubelet and kube-proxy on master nodes.
     The story for the master node label can be extended:
     https://storyboard.openstack.org/#!/story/2002618
     or we can add a new one.
   - Simplify CNI configuration; we have calico and flannel. Ideally we
     should have a single config script for each one. We could move flannel
     to the kubernetes-hosted version that uses kubernetes objects for
     storage (the way recommended by flannel and how it is done with kubeadm).
   - magnum support in gophercloud:
     https://github.com/gophercloud/gophercloud/issues/1003
   - *needs discussion* update the version of the heat templates (pike or
     queens). This needs its own thread.
   - Post-deployment scripts for clusters. I have had this for some time,
     but doing it in heat is slightly (not a lot) complicated. Most magnum
     users favor the simpler solution of passing a url of a manifest or
     script to the cluster (at least let's add a sha512sum; see the sketch
     after this list).
   - Simplify the addition of custom labels/parameters. To avoid patching
     magnum, it would be more ops-friendly to have a generic field of custom
     parameters.
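
As a rough sketch of what the sha512sum idea could look like on a node
(the MANIFEST_URL and MANIFEST_SHA512 parameters are hypothetical, only
for illustration):

curl -fsSL "${MANIFEST_URL}" -o /tmp/post-deploy.yaml
echo "${MANIFEST_SHA512}  /tmp/post-deploy.yaml" | sha512sum -c - || exit 1
kubectl apply -f /tmp/post-deploy.yaml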

Not discussed in the last meeting but we should in the next ones:

   - Allow cluster scaling from different users in the same project:
     https://storyboard.openstack.org/#!/story/2002648
   - Add the option to remove a node from a resource group for swarm
     clusters, like in kubernetes:
     https://storyboard.openstack.org/#!/story/2002677

Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday
1700UTC.

You can always consult this page [1] for future meetings.

Cheers,
Spyros

[1] https://wiki.openstack.org/wiki/Meetings/Containers

On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  wrote:

> Hello list,
>
> We are going to have a second weekly meeting for magnum for 3 weeks
> as a test to reach out to contributors in the Americas.
>
> You can join us tomorrow (or today for some?) at 1700UTC in
> #openstack-containers .
>
> Cheers,
> Spyros
>
>
>


[openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-20 Thread Spyros Trigazis
Hello list,

We are going to have a second weekly meeting for magnum for 3 weeks
as a test to reach out to contributors in the Americas.

You can join us tomorrow (or today for some?) at 1700UTC in
#openstack-containers .

Cheers,
Spyros


Re: [openstack-dev] [Openstack-operators] [openstack-operators][heat][oslo.db][magnum] Configure maximum number of db connections

2018-06-19 Thread Spyros Trigazis
Hello lists,

With the heat team's help I figured it out. Thanks Jay for looking into it.

The issue comes from [1], where max_overflow is set to
executor_thread_pool_size if it is set to a lower value, to address
another issue. In my case I had a lot of RAM and CPU, so I could push
for threads, but I was "short" on db connections. The formula to
calculate the number of connections can be like this:
num_heat_hosts=4
heat_api_workers=2
heat_api_cfn_workers=2
num_engine_workers=4
executor_thread_pool_size = 22
max_pool_size=4
max_overflow=executor_thread_pool_size
num_heat_hosts * (max_pool_size + max_overflow) *
  (heat_api_workers + num_engine_workers + heat_api_cfn_workers) = 832
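
In shell form, a small sketch that just recomputes the number above from
the same values:

num_heat_hosts=4; heat_api_workers=2; heat_api_cfn_workers=2; num_engine_workers=4
max_pool_size=4; max_overflow=22  # max_overflow follows executor_thread_pool_size
echo $(( num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers + num_engine_workers + heat_api_cfn_workers) ))
# prints 832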

And a note for medium to large magnum deployments: see the options we
have changed in heat.conf and adjust them according to your needs.
The db configuration described here, plus the changes we discovered in a
previous scale test, can help to have stable magnum and heat services.

For large stacks, or projects with many stacks, you need to change the
following values (or better, tune them according to your needs):

[Default]
executor_thread_pool_size = 22
max_resources_per_stack = -1
max_stacks_per_tenant = 1
action_retry_limit = 10
client_retry_limit = 10
engine_life_check_timeout = 600
max_template_size = 5242880
rpc_poll_timeout = 600
rpc_response_timeout = 600
num_engine_workers = 4

[database]
max_pool_size = 4
max_overflow = 22

[heat_api]
workers = 2

[heat_api_cfn]
workers = 2

Cheers,
Spyros

ps We will update the magnum docs as well

[1]
http://git.openstack.org/cgit/openstack/heat/tree/heat/engine/service.py#n375


On Mon, 18 Jun 2018 at 19:39, Jay Pipes  wrote:

> +openstack-dev since I believe this is an issue with the Heat source code.
>
> On 06/18/2018 11:19 AM, Spyros Trigazis wrote:
> > Hello list,
> >
> > I'm quite easily hitting this exception [1] with heat. The db server is
> > configured to have 1000
> > max_connections and 1000 max_user_connections, and in the database
> > section of heat
> > conf I have these values set:
> > max_pool_size = 22
> > max_overflow = 0
> > Full config attached.
> >
> > I ended up with this configuration based on this formula:
> > num_heat_hosts=4
> > heat_api_workers=2
> > heat_api_cfn_workers=2
> > num_engine_workers=4
> > max_pool_size=22
> > max_overflow=0
> > num_heat_hosts * (max_pool_size + max_overflow) * (heat_api_workers +
> > num_engine_workers + heat_api_cfn_workers)
> > 704
> >
> > What I have noticed is that the number of connections I expected with
> > the above formula is not respected.
> > Based on this formula each node (every node runs the heat-api,
> > heat-api-cfn and heat-engine) should
> > use up to 176 connections but they even reach 400 connections.
> >
> > Has anyone noticed a similar behavior?
>
> Looking through the Heat code, I see that there are many methods in the
> /heat/db/sqlalchemy/api.py module that use a SQLAlchemy session but
> never actually call session.close() [1] which means that the session
> will not be released back to the connection pool, which might be the
> reason why connections keep piling up.
>
> Not sure if there's any setting in Heat that will fix this problem.
> Disabling connection pooling will likely not help since connections are
> not properly being closed and returned to the connection pool to begin
> with.
>
> Best,
> -jay
>
> [1] Heat apparently doesn't use the oslo.db enginefacade transaction
> context managers either, which would help with this problem since the
> transaction context manager would take responsibility for calling
> session.flush()/close() appropriately.
>
>
> https://github.com/openstack/oslo.db/blob/43af1cf08372006aa46d836ec45482dd4b5b5349/oslo_db/sqlalchemy/enginefacade.py#L626
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-04-23 Thread Spyros Trigazis
Hi Sergey,

In magnum Queens we can set the private CA as the service account key.
Here [1] we set the ca.key file, when the label cert_manager_api is
set to true.

Cheers,
Spyros

[1]
https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh#L32
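
The label is set at cluster template creation time; a rough sketch with
the OpenStack CLI (the image and network names are placeholders, other
arguments elided):

openstack coe cluster template create k8s-atomic \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels cert_manager_api=true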

On 20 April 2018 at 19:57, Sergey Filatov  wrote:

> Hello,
>
> I looked into the k8s drivers for magnum and I see that each api-server on a
> master node generates its own service-account-key-file. This causes issues
> with service accounts authenticating on the api-server (in case the
> api-server endpoint moves).
> As far as I understand, we should either have all api-server keys synced
> across api-servers or pre-generate a single api-server key.
>
> What is the way for magnum to get over this issue?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Spyros Trigazis
Hello,

After discussion with the keystone team at the above session, keystone
will not provide a way to transfer trusts or application credentials,
since that wouldn't address the above problem (the member who leaves the
team can still auth with keystone if they have the trust/app-creds).

In magnum we need a way for admins and the cluster owner to rotate the
trust or app-creds and the certificates.

We can leverage the existing rotate_ca API to rotate the CA and, at the
same time, the trust. Since this API is designed only to rotate the CA, we
can add a cluster action to transfer ownership of the cluster. This action
should be allowed to be executed by the admin or the current owner of a
given cluster.

At the same time, the trust created by heat for every stack suffers from
the same problem; we should check with the heat team what their plan is.

Cheers,
Spyros

On 27 February 2018 at 20:53, Ricardo Rocha  wrote:

> Hi Lance.
>
> On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad 
> wrote:
> >
> >
> > On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
> >> Hi.
> >>
> >> We have an issue on the way Magnum uses keystone trusts.
> >>
> >> Magnum clusters are created in a given project using HEAT, and require
> >> a trust token to communicate back with OpenStack services -  there is
> >> also integration with Kubernetes via a cloud provider.
> >>
> >> This trust belongs to a given user, not the project, so whenever we
> >> disable the user's account - for example when a user leaves the
> >> organization - the cluster becomes unhealthy as the trust is no longer
> >> valid. Given the token is available in the cluster nodes, accessible
> >> by users, a trust linked to a service account is also not a viable
> >> solution.
> >>
> >> Is there an existing alternative for this kind of use case? I guess
> >> what we might need is a trust that is linked to the project.
> > This was proposed in the original application credential specification
> > [0] [1]. The problem is that you're sharing an authentication mechanism
> > with multiple people when you associate it to the life cycle of a
> > project. When a user is deleted or removed from the project, nothing
> > would stop them from accessing OpenStack APIs if the application
> > credential or trust isn't rotated out. Even if the credential or trust
> > were scoped to the project's life cycle, it would need to be rotated out
> > and replaced when users come and go for the same reason. So it would
> > still be associated to the user life cycle, just indirectly. Otherwise
> > you're allowing unauthorized access to something that should be
> protected.
> >
> > If you're at the PTG - we will be having a session on application
> > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
> > room [3].
>
> Thanks for the reply, i now understand the issue.
>
> I'm not at the PTG. Had a look at the etherpad but it seems app
> credentials will have a similar lifecycle so not suitable for the use
> case above - for the same reasons you mention.
>
> I wonder what's the alternative to achieve what we need in Magnum?
>
> Cheers,
>   Ricardo
>
> > [0] https://review.openstack.org/#/c/450415/
> > [1] https://review.openstack.org/#/c/512505/
> > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
> > [3] http://ptg.openstack.org/ptg.html
> >>
> >> I believe the same issue would be there using application credentials,
> >> as the ownership is similar.
> >>
> >> Cheers,
> >>   Ricardo
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [magnum] New meeting time Tue 1000UTC

2018-02-05 Thread Spyros Trigazis
Hello,

Heads up, the containers team meeting has changed from 1600UTC to 1000UTC.

See you there tomorrow at #openstack-meeting-alt !
Spyros


[openstack-dev] [magnum] Rocky Magnum PTL candidacy

2018-02-03 Thread Spyros Trigazis
Dear Stackers,

I would like to nominate myself as PTL for the Magnum project for the
Rocky cycle.

I have been consistently contributing to Magnum since February 2016 and
I have been a core reviewer since August 2016. Since then, I have contributed
to significant features like cluster drivers, add Magnum tests to Rally
(I'm core reviewer to rally to help the rally team with Magnum related
reviews), wrote Magnum's installation tutorial and served as docs
liaison for the project. My latest contributions include the swarm-mode
driver, containerization of the heat-agent and the remaining kubernetes
components, fixed the long standing problem of adding custom CAs to the
clusters and brought the kubernetes driver up to date, with RBAC
configuration and the latest kubernetes dashboard. I have been the
release liaison for Magnum for Pike and served as PTL for the Queens
release. I have contributed a lot in Magnum's CI jobs (adding
multi-node, DIB and new driver jobs). I have been working closely with
other projects consumed by Magnum like Heat, Fedora Atomic, kubernetes
python client and kubernetes rpms. Despite the slowdown in development
due to a shortage of contributors, we managed to keep the project up to
date and increase the user base.

For the next cycle, I want to enable the Magnum team to complete the
work on cluster upgrades, cluster federation, cluster auto-healing,
support for different container runtimes and container network backends.

Thanks for considering me,
Spyros Trigazis

[0]
https://git.openstack.org/cgit/openstack/election/tree/candidates/rocky/Magnum/strigazi.txt?id=7a31af003f1be68ee81229c8c828716838e5b8dd


Re: [openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers

2018-01-09 Thread Spyros Trigazis
Hi Greg,

You can try to build an image with this process [1]. I haven't used it for
some time, since we rely on the upstream image.

Another option that I would like to investigate is to build a system
container with frakti or clear containers, similar to these container
images [2] [3] [4]. Then you can install that container on the atomic host.
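
As a rough sketch of what that could look like on the atomic host (the
image name below is hypothetical, just to illustrate the atomic CLI
workflow for system containers):

sudo atomic pull --storage ostree docker.io/example/frakti-system-container:latest
sudo atomic install --system --name frakti docker.io/example/frakti-system-container:latest
sudo systemctl start frakti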

We could discuss this during the magnum meeting today at 16h00 UTC in
#openstack-meeting-alt [5].

Cheers,
Spyros

[1]
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/image/fedora-atomic/README.rst
[2]
https://github.com/kubernetes-incubator/cri-o/tree/master/contrib/system_containers/fedora
[3]
https://github.com/projectatomic/atomic-system-containers/tree/master/docker-centos
[4]
https://gitlab.cern.ch/cloud/atomic-system-containers/tree/cern-qa/docker-centos
[5] https://wiki.openstack.org/wiki/Meetings/Containers

On 8 January 2018 at 16:42, Waines, Greg  wrote:

> Hey there,
>
>
>
> I am currently running magnum with the fedora-atomic image that is
> installed as part of the devstack installation of magnum.
>
> This fedora-atomic image has kubernetes with a CRI of the standard docker
> container.
>
>
>
> Where can i find (or how do i build) a fedora-atomic image with kubernetes
> and either frakti or clear containers (runV) as the CRI ?
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-24 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 20:37, Sergio Morales Acuña  wrote:
> Dear Spyros:
>
> Thanks for your answer. I'm moving my cloud to Pike!.
>
> The problems I encountered were with the TCP listeners for the etcd's
> LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I have to put
> a -k).

In [1] and [2] the certs are passed to curl. Is there another issue that makes you need -k?

[1] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/network-config-service.sh?h=stable%2Focata#n50
[2] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/network-config-service.sh?h=stable/ocata#n56

>
> I'm using Kolla binary with CentOS 7, so I also have problems with the
> kubernetes python libraries (they needed updates to be able to handle IP
> addresses in certificates)

I think this problem is fixed in ocata [3]; what did you have to change?

[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/make-cert.sh?h=stable%2Focata

>
> Cheers and thanks again.

If you discover any bugs please report them, and if you need anything feel
free to ask here or in #openstack-containers.

Cheers,
Spyros

>
>
> El mié., 22 nov. 2017 a las 5:30, Spyros Trigazis ()
> escribió:
>>
>> Hi Sergio,
>>
>> On 22 November 2017 at 03:31, Sergio Morales Acuña 
>> wrote:
>> > I'm using Openstack Ocata and trying Magnum.
>> >
>> > I encountered a lot of problems but I been able to solved many of them.
>>
>> Which problems did you encounter? Can you be more specific? Can we solve
>> them
>> for everyone else?
>>
>> >
>> > Now I'm curious about some aspects of Magnum:
>> >
>> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
>> > create a custom fedora-atomic-27? What about RBAC?
>>
>> Since Pike, magnum is running kubernetes in containers on fedora 26.
>> In fedora atomic 27 kubernetes etcd and flannel are removed from the
>> base image so running them in containers is the only way.
>>
>> For RBAC, you need 1.8 and with Pike you can get it. just by changing
>> one parameter.
>>
>> >
>> > ¿Any one here using Magnum on daily basis? If yes, What version are you
>> > using?
>>
>> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
>> running
>> Pike and we use only the fedora atomic drivers.
>>
>> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
>> Vexxhost is running magnum:
>> https://vexxhost.com/public-cloud/container-services/kubernetes/
>> Stackhpc:
>> https://www.stackhpc.com/baremetal-cloud-capacity.html
>>
>> >
>> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need
>> > to
>> > upgrade Magnum to follow K8S's crazy changes?
>>
>> Atomic is maintained and supported much more than CoreOS in magnum.
>> There wasn't much interest from developers for CoreOS.
>>
>> >
>> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>>
>> Magnum Ocata is not too old but it will eventually be since it misses the
>> capability of running kubernetes on containers. Pike allows this option
>> and can
>> keep up with kubernetes easily.
>>
>> >
>> > ¿Where I can found updated articles about the state of Magnum and it's
>> > future?
>>
>> I did the project update presentation for magnum at the Sydney summit.
>> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>>
>> Cheers,
>> Spyros
>>
>> >
>> > Cheers
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
I forgot to include the Pike release notes
https://docs.openstack.org/releasenotes/magnum/pike.html

Spyros

On 22 November 2017 at 09:29, Spyros Trigazis  wrote:
> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
>> I'm using Openstack Ocata and trying Magnum.
>>
>> I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve them
> for everyone else?
>
>>
>> Now I'm curious about some aspects of Magnum:
>>
>> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
>> create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
>>
>> ¿Any one here using Magnum on daily basis? If yes, What version are you
>> using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are 
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
>>
>> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
>> upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
>>
>> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option and 
> can
> keep up with kubernetes easily.
>
>>
>> ¿Where I can found updated articles about the state of Magnum and it's
>> future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Cheers,
> Spyros
>
>>
>> Cheers
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>



Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
> I'm using Openstack Ocata and trying Magnum.
>
> I encountered a lot of problems but I have been able to solve many of them.

Which problems did you encounter? Can you be more specific? Can we solve them
for everyone else?

>
> Now I'm curious about some aspects of Magnum:
>
> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> create a custom fedora-atomic-27? What about RBAC?

Since Pike, magnum has been running kubernetes in containers on fedora 26.
In fedora atomic 27, kubernetes, etcd and flannel are removed from the
base image, so running them in containers is the only way.

For RBAC you need 1.8, and with Pike you can get it just by changing
one parameter.
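
A rough sketch, assuming the parameter in question is the kube_tag label
that selects the kubernetes container version (the exact tag depends on
the images published for your release; the other names here are
placeholders):

openstack coe cluster template create k8s-rbac \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels kube_tag=v1.9.3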

>
> ¿Any one here using Magnum on daily basis? If yes, What version are you
> using?

In our private cloud at CERN we have ~120 clusters with ~450 VMs; we are
running Pike and we use only the fedora atomic drivers.
http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
Vexxhost is running magnum:
https://vexxhost.com/public-cloud/container-services/kubernetes/
Stackhpc:
https://www.stackhpc.com/baremetal-cloud-capacity.html

>
> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> upgrade Magnum to follow K8S's crazy changes?

Atomic is maintained and supported much more than CoreOS in magnum.
There wasn't much interest from developers for CoreOS.

>
> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?

Magnum Ocata is not too old, but it eventually will be, since it misses the
capability of running kubernetes in containers. Pike allows this option and
can keep up with kubernetes easily.

>
> ¿Where I can found updated articles about the state of Magnum and it's
> future?

I did the project update presentation for magnum at the Sydney summit.
https://www.openstack.org/videos/sydney-2017/magnum-project-update

Cheers,
Spyros

>
> Cheers
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Spyros Trigazis
Hi Vahric,

A very important reason that we use fedora atomic is that we
are not maintaining our own special image. We use the upstream
operating system, we rely on the Fedora Project and we
contribute back to it. If we used ubuntu we would need to
maintain our own qcow image.

We also use the same containers as the Fedora Atomic project,
so we have container images tested by more people.

CoreOS is kubernetes-oriented; they updated Docker only
last week [1], from 1.12.6 to 17.09. You can contribute a coreos
swarm-mode driver if you want, but it will rely on CoreOS
to update the docker version.

Support for swarm-mode was only added in Pike. You can
follow what Ricardo proposed or, as you said, update all
your OpenStack services.
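
For reference, on Pike a swarm-mode cluster template is created roughly
like this (a sketch; the image and network names are placeholders):

openstack coe cluster template create swarm-mode-atomic \
  --coe swarm-mode \
  --image fedora-atomic-latest \
  --external-network public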

Cheers,
Spyros

[1] https://coreos.com/releases/

On 2 November 2017 at 09:34, Ricardo Rocha  wrote:
> Hi again.
>
> On Wed, Nov 1, 2017 at 9:47 PM, Vahric MUHTARYAN  wrote:
>> Hello Ricardo ,
>>
>> Thanks for your explanation and answers.
>> One more question, what is the possibility to keep using Newton (right now i 
>> have it) and use latest Magnum features like swarm mode without upgrade 
>> Openstack ? Does it possible ?
>
> I don't think this functionality is available in Magnum Newton.
>
> One option though is to upgrade only Magnum, there should be no
> dependency on more recent versions of other components - assuming you
> either have a separate control plane for Magnum or are able to split
> it.
>
> Cheers,
>   Ricardo
>
>>
>> Regards
>> VM
>>
>> On 30.10.2017 01:19, "Ricardo Rocha"  wrote:
>>
>> Hi Vahric.
>>
>> On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  
>> wrote:
>> > Hello All ,
>> >
>> >
>> >
>> > I found some blueprint about supporting Docker Swarm Mode
>> > https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>> >
>> >
>> >
>> > I understood that related development is not over yet and no any 
>> Openstack
>> > version or Magnum version to test it also looks like some more thing 
>> to do.
>> >
>> > Could you pls inform when we should expect support of Docker Swarm 
>> Mode ?
>>
>> Swarm mode is already available in Pike:
>> https://docs.openstack.org/releasenotes/magnum/pike.html
>>
>> > Another question is fedora atomic is good but looks like its not 
>> up2date for
>> > docker , instead of use Fedora Atomic , why you do not use Ubuntu, or 
>> some
>> > other OS and directly install docker with requested version ?
>>
>> Atomic also has advantages (immutable, etc), it's working well for us
>> at CERN. There are also Suse and CoreOS drivers, but i'm not familiar
>> with those.
>>
>> Most pieces have moved to Atomic system containers, including all
>> kubernetes components so the versions are decouple from the Atomic
>> version.
>>
>> We've also deployed locally a patch running docker itself in a system
>> container, this will get upstream with:
>> https://bugs.launchpad.net/magnum/+bug/1727700
>>
>> With this we allow our users to deploy clusters with any docker
>> version (selectable with a label), currently up to 17.09.
>>
>> > And last, to help to over waiting items “Next working items: ”  how we 
>> could
>> > help ?
>>
>> I'll let Spyros reply to this and give you more info on the above items 
>> too.
>>
>> Regards,
>>   Ricardo
>>
>> >
>> >
>> >
>> > Regards
>> >
>> > Vahric Muhtaryan
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Sydney Forum Project Onboarding Rooms

2017-10-10 Thread Spyros Trigazis
Magnum - Spyros Trigazis - 

Thanks!

On 9 October 2017 at 23:24, Kendall Nelson  wrote:
> Wanted to keep this thread towards the top of inboxes for those I haven't
> heard from yet.
>
> About a 1/4 of the way booked, so there are still slots available!
>
> -Kendall (diablo_rojo)
>
>
> On Thu, Oct 5, 2017 at 8:50 AM Kendall Nelson  wrote:
>>
>> Hello :)
>>
>> We have a little over 40 slots available so we should be able to
>> accommodate almost everyone, but it will be a first response first serve
>> basis.
>>
>> Logistics: Slots are 40 min long and will have projection set up in them.
>> The rooms have a capacity of about 40 people and will be set up classroom
>> style.
>>
>> If you are interested in reserving a spot, just reply directly to me and I
>> will put your project on the list. Please let me know if you want one and
>> also include the names and emails anyone that will be speaking with you.
>>
>> When slots run out, they run out.
>>
>> Thanks!
>>
>> -Kendall Nelson (diablo_rojo)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-22 Thread Spyros Trigazis
Hi Greg,

Can you revisit your policy configuration and try again?

See here:
http://git.openstack.org/cgit/openstack/magnum/plain/etc/magnum/policy.json?h=stable/newton

Cheers,
Spyros


On 22 September 2017 at 13:49, Waines, Greg  wrote:
> Just another note on this ...
>
>
>
> We have
>
> - setup a ‘magnum’ domain, and
>
> - setup a ‘trustee_domain_admin’ user within that domain, and
>
> - gave that user and domain the admin role, and <-- actually not
>   100% sure about this
>
> - referenced these items in magnum.conf
>
>   i.e. trustee_domain_name, trustee_domain_admin_name,
>   trustee_domain_admin_password
>
>
>
> ... but still seeing the trust_domain_id issue in the admin context (see
> email below).
>
>
>
> let me know if anyone has some ideas on issue or next steps to look at,
>
> Greg.
>
>
>
>
>
> From: Greg Waines 
> Reply-To: "openstack-dev@lists.openstack.org"
> 
> Date: Wednesday, September 20, 2017 at 12:20 PM
> To: "openstack-dev@lists.openstack.org" 
> Cc: "Sun, Yicheng (Jerry)" 
> Subject: [openstack-dev] [magnum] issue with
> admin_osc.keystone().trustee_domain_id
>
>
>
> We are in the process of integrating MAGNUM into our OpenStack distribution.
>
> We are working with NEWTON version of MAGNUM.
>
> We have the MAGNUM processes up and running and configured.
>
>
>
> However we are seeing the following error (see stack trace below) on
> virtually all MAGNUM CLI calls.
>
>
>
> The code where the stack trace is triggered:
>
> def add_policy_attributes(target):
>
> """Adds extra information for policy enforcement to raw target object"""
>
> admin_context = context.make_admin_context()
>
> admin_osc = clients.OpenStackClients(admin_context)
>
> trustee_domain_id = admin_osc.keystone().trustee_domain_id
>
> target['trustee_domain_id'] = trustee_domain_id
>
> return target
>
>
>
> ( NOTE: that this code was introduced upstream as part of a fix for
> CVE-2016-7404:
>
> https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14
> )
>
>
>
> Stack Trace:
>
> File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in
> callfunction
>
> result = f(self, *args, **kwargs)
>
>
>
>   File "", line 2, in get_all
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 130,
> in wrapper
>
> exc=exception.PolicyNotAuthorized, action=action)
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 97,
> in enforce
>
> #add_policy_attributes(target)
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 106,
> in add_policy_attributes
>
> trustee_domain_id = admin_osc.keystone().trustee_domain_id
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/keystone.py", line
> 237, in trustee_domain_id
>
> self.domain_admin_session
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py",
> line 136, in get_access
>
> self.auth_ref = self.get_auth_ref(session)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py",
> line 167, in get_auth_ref
>
> authenticated=False, log=False, **rkwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> 681, in post
>
> return self.request(url, 'POST', **kwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101,
> in inner
>
> return wrapped(*args, **kwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> 570, in request
>
> raise exceptions.from_response(resp, method, url)
>
>
>
> NotFound: The resource could not be found. (HTTP 404)
>
>
>
>
>
> Any ideas on what our issue could be ?
>
> Or next steps to investigate ?
>
>
>
> thanks in advance,
>
> Greg.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



Re: [openstack-dev] [magnum] Weekly meetings

2017-08-28 Thread Spyros Trigazis
Hello,

As discussed in last week's meeting [0], this week and next we will
discuss plans about Queens and review blueprints. So, if you want
to add discussion items please bring them up tomorrow or next week in
our weekly meeting. If for any reason you can't attend, you can start
a thread on the mailing list.

Also this week, we will go through our blueprint list and clean it up from
obsolete blueprints.

Finally, I would like to ask you to review this blueprint [1] about cluster
federation and add your ideas and comments in the review.

Cheers,
Spyros

[0] 
http://eavesdrop.openstack.org/meetings/containers/2017/containers.2017-08-22-16.00.html
[1] https://review.openstack.org/#/c/489609/

On 22 August 2017 at 17:47, Spyros Trigazis  wrote:
> Hello,
>
> Recently we decided to have bi-weekly meetings. Starting from today we will
> have weekly meetings again.
>
> From now on, we will have our meeting every Tuesday at 1600 UTC
> in #openstack-meeting-alt . For today, that is in 13 minutes.
>
> Cheers,
> Spyros



[openstack-dev] [magnum] Weekly meetings

2017-08-22 Thread Spyros Trigazis
Hello,

Recently we decided to have bi-weekly meetings. Starting from today we will
have weekly meetings again.

From now on, we will have our meeting every Tuesday at 1600 UTC
in #openstack-meeting-alt . For today, that is in 13 minutes.

Cheers,
Spyros



[openstack-dev] [magnum] PTL Candidacy for Queens

2017-08-04 Thread Spyros Trigazis
Hello!

I would like to nominate myself as PTL for the Magnum project for the
Queens cycle.

I have been consistently contributing to Magnum since February 2016
and I have been a core reviewer since August 2016. Since then, I have
contributed to significant features like cluster drivers, add Magnum
tests to Rally (I'm core reviewer to rally to help the rally team with
Magnum related reviews), wrote Magnum's installation tutorial and
served as docs liaison for the project. My latest contribution is the
swarm-mode cluster driver. I have been the release liaison for Magnum
for Pike and I have contributed a lot to Magnum's CI jobs (adding
multi-node, DIB and new driver jobs; I haven't managed to add Magnum
to CentOS CI yet :( but we have been granted access). Finally, I have been
working closely with other projects consumed by Magnum like Heat and
Fedora Atomic.

My plans for Queens are to contribute and guide other contributors to:
* Finalize and stabilize the very much wanted feature for cluster
  upgrades.
* Add functionality to heal clusters from a failed state.
* Add functionality for federated Kubernetes clusters and potentially
  other cluster types.
* Add Kuryr as a network driver.

Thanks for considering me,
Spyros Trigazis

[0] https://review.openstack.org/490893



Re: [openstack-dev] [release][glance][barbican][telemetry][keystone][designate][congress][magnum][searchlight][swift][tacker] unreleased libraries

2017-06-09 Thread Spyros Trigazis
Thanks for the reminder.

python-magnumclient https://review.openstack.org/#/c/472718/

Cheers,
Spyros

On 9 June 2017 at 16:39, Doug Hellmann  wrote:

> We have several teams with library deliverables that haven't seen
> any releases at all yet this cycle. Please review the list below,
> and if there are changes on master since the last release prepare
> a release request.  Remember that because of the way our CI system
> works, patches that land in libraries are not used in tests for
> services that use the libs unless the library has a release and the
> constraints list is updated.
>
> Doug
>
> glance-store
> instack
> pycadf
> python-barbicanclient
> python-ceilometerclient
> python-congressclient
> python-designateclient
> python-keystoneclient
> python-magnumclient
> python-searchlightclient
> python-swiftclient
> python-tackerclient
> requestsexceptions
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Spyros Trigazis
On 30 May 2017 at 15:26, Hongbin Lu  wrote:

> Please consider leveraging Fuxi instead.
>

Is there any functionality missing from rexray?


> Kuryr/Fuxi team is working very hard to deliver the docker network/storage
> plugins. I wish you will work with us to get them integrated with
> Magnum-provisioned cluster.
>

Patches are welcome to support fuxi as an *option* instead of rexray, so
users can choose.
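
For context, a rough sketch of how a cinder-backed volume is consumed
from Docker through the rexray cinder plugin (assuming the plugin is
already installed and configured with OpenStack credentials; the volume
name is illustrative):

docker volume create --driver rexray/cinder --name myvol
docker run --rm -v myvol:/data alpine touch /data/hello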


> Currently, COE clusters provisioned by Magnum is far away from
> enterprise-ready. I think the Magnum project will be better off if it can
> adopt Kuryr/Fuxi which will give you a better OpenStack integration.
>
>
>
> Best regards,
>
> Hongbin
>

fuxi feature request: Add authentication using a trustee and a trustID.

Cheers,
Spyros


>
>
> *From:* Spyros Trigazis [mailto:strig...@gmail.com]
> *Sent:* May-30-17 7:47 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for
> Fuxi-golang
>
>
>
> FYI, there is already a cinder volume driver for docker available, written
>
> in golang, from rexray [1].
>
>
> Our team recently contributed to libstorage [3], it could support manila
> too. Rexray
> also supports the popular cloud providers.
>
> Magnum's docker swarm cluster driver, already leverages rexray for cinder
> integration. [2]
>
> Cheers,
> Spyros
>
>
>
> [1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
>
> [2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
>
> [3] http://git.openstack.org/cgit/openstack/magnum/tree/magn
> um/drivers/common/templates/swarm/fragments/volume-
> service.sh?h=stable/ocata
>
>
>
> On 27 May 2017 at 12:15, zengchen  wrote:
>
> Hi John & Ben:
>
>  I have committed a patch[1] to add a new repository to Openstack. Please
> take a look at it. Thanks very much!
>
>
>
>  [1]: https://review.openstack.org/#/c/468635
>
>
>
> Best Wishes!
>
> zengchen
>
>
>
>
>
> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>
>
>
>
>
> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>
>
>
> Hi john:
>
> I have seen your updates on the bp. I agree with your plan on how to
> develop the codes.
>
> However, there is one issue I have to remind you that at present, Fuxi
> not only can convert
>
>  Cinder volume to Docker, but also Manila file. So, do you consider to
> involve Manila part of codes
>
>  in the new Fuxi-golang?
>
> Agreed, that's a really good and important point.  Yes, I believe Ben
> Swartzlander
>
>
>
> is interested, we can check with him and make sure but I certainly hope
> that Manila would be interested.
>
> Besides, IMO, It is better to create a repository for Fuxi-golang, because
>
>  Fuxi is the project of Openstack,
>
> Yeah, that seems fine; I just didn't know if there needed to be any more
> conversation with other folks on any of this before charing ahead on new
> repos etc.  Doesn't matter much to me though.
>
>
>
>
>
>Thanks very much!
>
>
>
> Best Wishes!
>
> zengchen
>
>
>
>
> At 2017-05-25 22:47:29, "John Griffith"  wrote:
>
>
>
>
>
> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>
> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
> language.
> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>
>
>
> At 2017-05-25 19:46:54, "zengchen"  wrote:
>
> Hi guys:
>
> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
> question is where to commit codes for it.
>
> We have two choice, 1. create a new repository, 2. create a new branch.
> IMO, the first one is much better. Because
>
> there are many differences in the layer of infrastructure, such as CI.
> What's your opinion? Thanks very much
>
>
>
> Best Wishes
>
> zengchen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Hi Zengchen,
>
>
>
> For now I was thinking just use Github and PR's outside of the OpenStack
> projects to bootstrap things and see how far we can get.  I'll update the
> BP this morning with what I believe to be the key tasks to work through.
>
>
>
> Thanks,
>
> John
>
>
>
>
> __
> OpenSt

Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Spyros Trigazis
FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support manila
too. Rexray also supports the popular cloud providers.

Magnum's docker swarm cluster driver already leverages rexray for cinder
integration. [3]
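
As a rough illustration of what this gives users on a swarm node, assuming
rexray is already configured with the cinder/openstack storage driver (the
volume name and size below are made up):

    docker volume create --driver rexray --opt size=10 demo-vol
    docker run -d -v demo-vol:/data nginx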

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3]
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen  wrote:

> Hi John & Ben:
>  I have committed a patch[1] to add a new repository to Openstack. Please
> take a look at it. Thanks very much!
>
>  [1]: https://review.openstack.org/#/c/468635
>
> Best Wishes!
> zengchen
>
>
>
>
>
> 在 2017-05-26 21:30:48,"John Griffith"  写道:
>
>
>
> On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:
>
>>
>> Hi john:
>> I have seen your updates on the bp. I agree with your plan on how to
>> develop the codes.
>> However, there is one issue I have to remind you that at present,
>> Fuxi not only can convert
>>  Cinder volume to Docker, but also Manila file. So, do you consider to
>> involve Manila part of codes
>>  in the new Fuxi-golang?
>>
> Agreed, that's a really good and important point.  Yes, I believe Ben
> Swartzlander
>
> is interested, we can check with him and make sure but I certainly hope
> that Manila would be interested.
>
>> Besides, IMO, It is better to create a repository for Fuxi-golang, because
>>  Fuxi is the project of Openstack,
>>
> Yeah, that seems fine; I just didn't know if there needed to be any more
> conversation with other folks on any of this before charing ahead on new
> repos etc.  Doesn't matter much to me though.
>
>
>>
>>Thanks very much!
>>
>> Best Wishes!
>> zengchen
>>
>>
>>
>>
>> At 2017-05-25 22:47:29, "John Griffith"  wrote:
>>
>>
>>
>> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>>
>>> Very sorry to foget attaching the link for bp of rewriting Fuxi with go
>>> language.
>>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>>
>>>
>>> At 2017-05-25 19:46:54, "zengchen"  wrote:
>>>
>>> Hi guys:
>>> hongbin had committed a bp of rewriting Fuxi with go language[1]. My
>>> question is where to commit codes for it.
>>> We have two choice, 1. create a new repository, 2. create a new branch.
>>> IMO, the first one is much better. Because
>>> there are many differences in the layer of infrastructure, such as CI.
>>> What's your opinion? Thanks very much
>>>
>>> Best Wishes
>>> zengchen
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> Hi Zengchen,
>>
>> For now I was thinking just use Github and PR's outside of the OpenStack
>> projects to bootstrap things and see how far we can get.  I'll update the
>> BP this morning with what I believe to be the key tasks to work through.
>>
>> Thanks,
>> John
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 13:58, Spyros Trigazis  wrote:

>
>
> On 17 May 2017 at 06:25, KiYoun Sung  wrote:
>
>> Hello,
>> Magnum team.
>>
>> I Installed Openstack newton and magnum.
>> I installed Magnum by source(master branch).
>>
>> I have two questions.
>>
>> 1.
>> After installation,
>> I created kubernetes cluster and it's CREATE_COMPLETE,
>> and I want to create kubernetes pod.
>>
>> My create script is below.
>> --
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: nginx
>>   labels:
>> app: nginx
>> spec:
>>   containers:
>>   - name: nginx
>> image: nginx
>> ports:
>> - containerPort: 80
>> --
>>
>> I tried "kubectl create -f nginx.yaml"
>> But, error has occured.
>>
>> Error message is below.
>> error validating "pod-nginx-with-label.yaml": error validating data:
>> unexpected type: object; if you choose to ignore these errors, turn
>> validation off with --validate=false
>>
>> Why did this error occur?
>>
>
> This is not related to magnum, it is related to your client. From where do
> you execute the
> kubectl create command? You computer? Some vm with a distributed file
> system?
>
>
>>
>> 2.
>> I want to access this kubernetes cluster service(like nginx) above the
>> Openstack magnum environment from outside world.
>>
>> I refer to this guide(https://docs.openstack.o
>> rg/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works), but
>> it didn't work.
>>
>> Openstack: newton
>> Magnum: 4.1.1 (master branch)
>>
>> How can I do?
>> Do I must install Lbaasv2?
>>
>
> You need lbaas V2 with octavia preferably. Not sure what is the
> recommended way to install.
>

Have a look here:
https://docs.openstack.org/draft/networking-guide/config-lbaas.html

Cheers,
Spyros


>
>
>>
>> Thank you.
>> Best regards.
>>
>
> Cheers,
> Spyros
>
>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 06:25, KiYoun Sung  wrote:

> Hello,
> Magnum team.
>
> I Installed Openstack newton and magnum.
> I installed Magnum by source(master branch).
>
> I have two questions.
>
> 1.
> After installation,
> I created kubernetes cluster and it's CREATE_COMPLETE,
> and I want to create kubernetes pod.
>
> My create script is below.
> --
> apiVersion: v1
> kind: Pod
> metadata:
>   name: nginx
>   labels:
> app: nginx
> spec:
>   containers:
>   - name: nginx
> image: nginx
> ports:
> - containerPort: 80
> --
>
> I tried "kubectl create -f nginx.yaml"
> But, error has occured.
>
> Error message is below.
> error validating "pod-nginx-with-label.yaml": error validating data:
> unexpected type: object; if you choose to ignore these errors, turn
> validation off with --validate=false
>
> Why did this error occur?
>

This is not related to magnum, it is related to your client. From where do
you execute the kubectl create command? Your computer? Some vm with a
distributed file system?
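
One thing worth checking (only a guess from the error message) is whether
your kubectl binary is much older than the cluster's apiserver:

    kubectl version   # compare the reported Client and Server versions

If they differ a lot, use a kubectl that matches the cluster instead of
turning validation off.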


>
> 2.
> I want to access this kubernetes cluster service(like nginx) above the
> Openstack magnum environment from outside world.
>
> I refer to this guide(https://docs.openstack.org/developer/magnum/dev/
> kubernetes-load-balancer.html#how-it-works), but it didn't work.
>
> Openstack: newton
> Magnum: 4.1.1 (master branch)
>
> How can I do?
> Do I must install Lbaasv2?
>

You need lbaas v2, preferably with octavia. I'm not sure what the
recommended way to install it is.


>
> Thank you.
> Best regards.
>

Cheers,
Spyros


>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Spyros Trigazis
IMO, coe is a little confusing. It is a term used mostly by people somehow
related to the magnum community. When I describe to users how to use magnum,
I spend a few moments explaining what we call a coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both of the above

It is very intuitive for users, because they will be using an openstack
cloud and they will want to use the magnum service. So, it only makes sense
to type "openstack magnum cluster" or "mcluster", which is shorter.


On 21 March 2017 at 02:24, Qiming Teng  wrote:

> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> > On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > >Team,
> > >
> > >Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> > >
> > >https://review.openstack.org/#/q/status:open+project:
> openstack/python-magnumclient+osc
> > >
> > >In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally, we’d
> like to have the name “cluster” but that word is already in use by Senlin.
> >
> > Unfortunately, the Senlin API uses a whole bunch of generic terms as
> > top-level REST resources, including "cluster", "event", "action",
> > "profile", "policy", and "node". :( I've warned before that use of
> > these generic terms in OpenStack APIs without a central group
> > responsible for curating the API would lead to problems like this.
> > This is why, IMHO, we need the API working group to be ultimately
> > responsible for preventing this type of thing from happening.
> > Otherwise, there ends up being a whole bunch of duplication and same
> > terms being used for entirely different things.
> >
>
> Well, I believe the name and namespaces used by Senlin is very clean.
> Please see the following outputs. All commands are contained in the
> cluster namespace to avoid any conflicts with any other projects.
>
> On the other hand, is there any document stating that Magnum is about
> providing clustering service? Why Magnum cares so much about the top
> level noun if it is not its business?
>

>From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a *cluster* configuration."

Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers
by any other name. Bay proved to be confusing to users. I don't think that
magnum should reserve the cluster noun, even if it were available.

[1] https://wiki.openstack.org/wiki/Magnum


>
>
> $ openstack --help | grep cluster
>
>   --os-clustering-api-version 
>
>   cluster action list  List actions.
>   cluster action show  Show detailed info about the specified action.
>   cluster build info  Retrieve build information.
>   cluster check  Check the cluster(s).
>   cluster collect  Collect attributes across a cluster.
>   cluster create  Create the cluster.
>   cluster delete  Delete the cluster(s).
>   cluster event list  List events.
>   cluster event show  Describe the event.
>   cluster expand  Scale out a cluster by the specified number of nodes.
>   cluster list   List the user's clusters.
>   cluster members add  Add specified nodes to cluster.
>   cluster members del  Delete specified nodes from cluster.
>   cluster members list  List nodes from cluster.
>   cluster members replace  Replace the nodes in a cluster with
>   specified nodes.
>   cluster node check  Check the node(s).
>   cluster node create  Create the node.
>   cluster node delete  Delete the node(s).
>   cluster node list  Show list of nodes.
>   cluster node recover  Recover the node(s).
>   cluster node show  Show detailed info about the specified node.
>   cluster node update  Update the node.
>   cluster policy attach  Attach policy to cluster.
>   cluster policy binding list  List policies from cluster.
>   cluster policy binding show  Show a specific policy that is bound to
>   the specified cluster.
>   cluster policy binding update  Update a policy's properties on a
>   cluster.
>   cluster policy create  Create a policy.
>   cluster policy delete  Delete policy(s).
>   cluster policy detach  Detach policy from cluster.
>   cluster policy list  List policies that meet the criteria.
>   cluster policy show  Show the policy details.
>   cluster policy type list  List the available policy types.
>   cluster policy type show  Get the details about a policy type.
>   cluster policy update  Update a policy.
>   cluster policy validate  Validate a policy.
>   cluster profile create  Create a profile.
>   cluster profile delete  Delete profile(s).
>   cluster profile list  List profiles that meet the criteria.
>   cluster profile show  Show profile details.
>   cluster profile type list  List t

Re: [openstack-dev] [magnum] [ocata] after installation, magnum is not found

2017-03-09 Thread Spyros Trigazis
Hi,

You haven't installed the magnum client. The service is running,
but the command line client is missing.

You need to install the client and to create and source an RC file with
your credentials.
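
Something along these lines should do it; the RC file name is just an
example, use whichever matches your cloud:

    pip install python-magnumclient
    source admin-openrc.sh
    magnum service-list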

Spyros

On 9 March 2017 at 07:49, Yu Wei  wrote:

> Hi guys,
>
> After installing openstack ocata magnum, magnum is not found.
>
> However, magnum-api and magnum-conduct are running well.
>
> How could I fix such problem? Is this bug in ocata?
>
>
> [root@controller bin]# systemctl status openstack-magnum-api.service
> openstack-magnum-conductor.service
> ● openstack-magnum-api.service - OpenStack Magnum API Service
>Loaded: loaded (/usr/lib/systemd/system/openstack-magnum-api.service;
> enabled; vendor preset: disabled)
>Active: active (running) since Thu 2017-03-09 11:51:33 CST; 13min ago
>  Main PID: 16195 (magnum-api)
>CGroup: /system.slice/openstack-magnum-api.service
>└─16195 /usr/bin/python2 /usr/bin/magnum-api
>
> Mar 09 11:51:33 controller systemd[1]: Started OpenStack Magnum API
> Service.
> Mar 09 11:51:33 controller systemd[1]: Starting OpenStack Magnum API
> Service...
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.646
> 16195 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now
> registers SIGUSR1 and SIGUSR2 by default for...rate reports.
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.647
> 16195 INFO magnum.api.app [-] Full WSGI config used:
> /etc/magnum/api-paste.ini
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.751
> 16195 WARNING keystonemiddleware.auth_token [-] AuthToken middleware is set
> with keystone_authtoken.service_token_role...this to True.
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.762
> 16195 INFO magnum.cmd.api [-] Starting server in PID 16195
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.767
> 16195 INFO magnum.cmd.api [-] Serving on http://192.168.111.20:9511
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.767
> 16195 INFO magnum.cmd.api [-] Server will handle each request in a new
> process up to 2 concurrent processes
> Mar 09 11:51:34 controller magnum-api[16195]: 2017-03-09 11:51:34.768
> 16195 INFO werkzeug [-]  * Running on http://192.168.111.20:9511/
>
> ● openstack-magnum-conductor.service - Openstack Magnum Conductor Service
>Loaded: loaded (/usr/lib/systemd/system/openstack-magnum-conductor.service;
> enabled; vendor preset: disabled)
>Active: active (running) since Thu 2017-03-09 11:51:33 CST; 13min ago
>  Main PID: 16200 (magnum-conducto)
>CGroup: /system.slice/openstack-magnum-conductor.service
>└─16200 /usr/bin/python2 /usr/bin/magnum-conductor
>
> Mar 09 11:51:33 controller systemd[1]: Started Openstack Magnum Conductor
> Service.
> Mar 09 11:51:33 controller systemd[1]: Starting Openstack Magnum Conductor
> Service...
> Mar 09 11:51:34 controller magnum-conductor[16200]: 2017-03-09
> 11:51:34.640 16200 WARNING oslo_reports.guru_meditation_report [-] Guru
> meditation now registers SIGUSR1 and SIGUSR2 by defau...rate reports.
> Mar 09 11:51:34 controller magnum-conductor[16200]: 2017-03-09
> 11:51:34.640 16200 INFO magnum.cmd.conductor [-] Starting server in PID
> 16200
> Mar 09 11:51:34 controller magnum-conductor[16200]:
> /usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:200:
> FutureWarning: The access_policy argument is changing its default value to
>  Mar 09 11:51:34 controller magnum-conductor[16200]: access_policy)
> Mar 09 11:51:34 controller magnum-conductor[16200]: 2017-03-09
> 11:51:34.648 16200 INFO oslo_messaging.server [-] blocking executor handles
> only one message at once. threading or eventlet e... recommended.
> Hint: Some lines were ellipsized, use -l to show in full.
> [root@controller bin]# magnum service-list
> bash: magnum: command not found...
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Swarm Mode template

2017-01-31 Thread Spyros Trigazis
Hi,

The hack-ish way is to check if the current master has a different IP than
the swarm_api_ip and, based on that, decide whether to run swarm init or
swarm join. The proper way is to have two resource groups (as you said),
one for the primary master and one for the secondary masters. This requires
some plumbing though.
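
Roughly, the check looks like the sketch below; the variable names are
illustrative, not the actual heat template parameters:

    # on every master node
    if [ "$NODE_IP" = "$SWARM_API_IP" ]; then
        docker swarm init --advertise-addr "$NODE_IP"
    else
        # the join token still has to be fetched from the primary somehow
        docker swarm join --token "$MANAGER_TOKEN" "$SWARM_API_IP:2377"
    fi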

We decided to have a _v2 driver in /contrib initially. I have a working
prototype based on fedora-25 (docker 1.12.6). I can push it and we can work
on it together, if you want.

Spyros

On 31 January 2017 at 20:52, Kevin Lefevre  wrote:

> On Tue, 2017-01-31 at 17:01 +0100, Spyros Trigazis wrote:
> > Hi.
> >
> > I have done it by checking the ip address of the master. The current
> > state of
> > the heat drivers doesn't allow the distinction between master > 1 or
> > master=1.
> >
>
> Please, could you elaborate on this ?
>
> Also what is your opinion about starting a new swarm driver for swarm
> mode ?
>
> > Spyros
> >
> >
> >
> > On 31 January 2017 at 16:33, Kevin Lefevre 
> > wrote:
> > > Hi, Docker 1.13 has been released with several improvements that
> > > brings
> > > swarm mode principles closer to Kubernetes such as docker-compose
> > > service swarm mode.
> > >
> > > I'd like to implement a v2 swarm template. I don't know if it's
> > > already
> > > been discussed.
> > >
> > > Swarm mode is a bit different but a lot simpler to deploy than
> > > Swarm
> > > Legacy.
> > >
> > > In Kubernetes you can deploy multiples masters at the same time but
> > > in
> > > swarm mode you have to:
> > > - bootstrap a first docker node
> > > - run docker swarm init
> > > - get a token (worker or manager)
> > > - bootstrap other worker
> > > - use manager or worker token depending manager count.
> > >
> > > I don't know what is the best way to do so in HEAT. I'm sure there
> > > are
> > > multiple options (I'm not an expert in HEAT i don't know if they
> > > are
> > > feasible) :
> > >
> > > - Bootstrap a first server
> > > - Wait for it to ready, run docker swarm init, get both manager and
> > > worker tokens
> > > - if manager count >1, we can bootstrap another resource group for
> > > extra managers which will use a manager token.
> > > - Bootstrap the rest of the worker and use a worker token.
> > >
> > > The difficulty is to handle multiples master properly, i'd like to
> > > hear
> > > your ideas about that.
> > >
> > >
> > > --
> > > Kevin Lefevre
> > >
> > > ___
> > > ___
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > > bscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > _
> > _
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> > cribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Kevin Lefevre
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Swarm Mode template

2017-01-31 Thread Spyros Trigazis
Hi.

I have done it by checking the IP address of the master. The current state
of the heat drivers doesn't allow distinguishing between master > 1 and
master = 1.

Spyros



On 31 January 2017 at 16:33, Kevin Lefevre  wrote:

> Hi, Docker 1.13 has been released with several improvements that brings
> swarm mode principles closer to Kubernetes such as docker-compose
> service swarm mode.
>
> I'd like to implement a v2 swarm template. I don't know if it's already
> been discussed.
>
> Swarm mode is a bit different but a lot simpler to deploy than Swarm
> Legacy.
>
> In Kubernetes you can deploy multiples masters at the same time but in
> swarm mode you have to:
> - bootstrap a first docker node
> - run docker swarm init
> - get a token (worker or manager)
> - bootstrap other worker
> - use manager or worker token depending manager count.
>
> I don't know what is the best way to do so in HEAT. I'm sure there are
> multiple options (I'm not an expert in HEAT i don't know if they are
> feasible) :
>
> - Bootstrap a first server
> - Wait for it to ready, run docker swarm init, get both manager and
> worker tokens
> - if manager count >1, we can bootstrap another resource group for
> extra managers which will use a manager token.
> - Bootstrap the rest of the worker and use a worker token.
>
> The difficulty is to handle multiples master properly, i'd like to hear
> your ideas about that.
>
>
> --
> Kevin Lefevre
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Spyros Trigazis
Or start writing down (in the BP) what you want to put in the driver:
network, lbaas, scripts and the order of the scripts. Then we can see
if it's possible to adapt the current coreos driver.

Spyros

On Jan 24, 2017 22:54, "Hongbin Lu"  wrote:

> As Spyros mentioned, an option is to start by cloning the existing
> templates. However, I have a concern for this approach because it will
> incur a lot of duplication. An alternative approach is modifying the
> existing CoreOS templates in-place. It might be a little difficult to
> implement but it saves your overhead to deprecate the old version and roll
> out the new version.
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Spyros Trigazis [mailto:strig...@gmail.com]
> *Sent:* January-24-17 3:47 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] CoreOS template v2
>
>
>
> Hi.
>
>
>
> IMO, you should add a BP and start by adding a v2 driver in /contrib.
>
>
>
> Cheers,
>
> Spyros
>
>
>
> On Jan 24, 2017 20:44, "Kevin Lefevre"  wrote:
>
> Hi,
>
> The CoreOS template is not really up to date and in sync with upstream
> CoreOS « Best Practice » (https://github.com/coreos/coreos-kubernetes),
> it is more a port of th fedora atomic template but CoreOS has its own
> Kubernetes deployment method.
>
> I’d like to implement the changes to sync kubernetes deployment on CoreOS
> to latest kubernetes version (1.5.2) along with standards components
> according the CoreOS Kubernetes guide :
>   - « Defaults » add ons like kube-dns , heapster and kube-dashboard
> (kube-ui has been deprecated for a long time and is obsolete)
>   - Canal for network policy (Calico and Flannel)
>   - Add support for RKT as container engine
>   - Support sane default options recommended by Kubernetes upstream
> (admission control : https://kubernetes.io/docs/
> admin/admission-controllers/, using service account…)
>   - Of course add every new parameters to HOT.
>
> These changes are difficult to implement as is (due to the fragment
> concept and everything is a bit messy between common and specific template
> fragment, especially for CoreOS).
>
> I’m wondering if it is better to clone the CoreOS v1 template to a new v2
> template en build from here ?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Spyros Trigazis
Hi.

IMO, you should add a BP and start by adding a v2 driver in /contrib.

Cheers,
Spyros

On Jan 24, 2017 20:44, "Kevin Lefevre"  wrote:

> Hi,
>
> The CoreOS template is not really up to date and in sync with upstream
> CoreOS « Best Practice » (https://github.com/coreos/coreos-kubernetes),
> it is more a port of th fedora atomic template but CoreOS has its own
> Kubernetes deployment method.
>
> I’d like to implement the changes to sync kubernetes deployment on CoreOS
> to latest kubernetes version (1.5.2) along with standards components
> according the CoreOS Kubernetes guide :
>   - « Defaults » add ons like kube-dns , heapster and kube-dashboard
> (kube-ui has been deprecated for a long time and is obsolete)
>   - Canal for network policy (Calico and Flannel)
>   - Add support for RKT as container engine
>   - Support sane default options recommended by Kubernetes upstream
> (admission control : https://kubernetes.io/docs/
> admin/admission-controllers/, using service account…)
>   - Of course add every new parameters to HOT.
>
> These changes are difficult to implement as is (due to the fragment
> concept and everything is a bit messy between common and specific template
> fragment, especially for CoreOS).
>
> I’m wondering if it is better to clone the CoreOS v1 template to a new v2
> template en build from here ?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][devstack][rally][python-novaclient][magnum] switching to keystone v3 by default

2016-12-01 Thread Spyros Trigazis
I think for magnum we are OK.

This job [1] finished successfully using keystone v3 [2].

Spyros

[1]
http://logs.openstack.org/93/400593/9/check/gate-functional-dsvm-magnum-api/93e8c14/
[2]
http://logs.openstack.org/93/400593/9/check/gate-functional-dsvm-magnum-api/93e8c14/logs/devstacklog.txt.gz#_2016-12-01_11_32_58_033

On 1 December 2016 at 12:26, Davanum Srinivas  wrote:

> It has taken years to get here with a lot of work from many folks.
>
> -1 for Any revert!
>
> https://etherpad.openstack.org/p/v3-only-devstack
> http://markmail.org/message/aqq7itdom36omnf6
> https://review.openstack.org/#/q/status:merged+project:
> openstack-dev/devstack+branch:master+topic:bp/keystonev3
>
> Thanks,
> Dims
>
> On Thu, Dec 1, 2016 at 5:38 AM, Andrey Kurilin 
> wrote:
> > Hi folks!
> >
> > Today devstack team decided to switch to keystone v3 by default[0].
> > Imo, it is important thing, but it was made in silent, so other project
> was
> > unable to prepare to that change. Also, proposed way to select Keystone
> API
> > version via devstack configuration doesn't work(IDENTITY_API_VERSION
> > variable doesn't work [1] ).
> >
> > Switching to keystone v3 broke at least Rally and Magnum(based on
> comment to
> > [0])  gates. Also, python-novaclient has two separate jobs for checking
> > compatibility with keystone V2 and V3. One of these jobs became
> redundant.
> >
> > That is why I submitted a revert [2] .
> >
> > PS: Please, do not make such changes in silent!
> >
> > [0] - https://review.openstack.org/#/c/386183
> > [1] -
> > https://github.com/openstack-infra/project-config/blob/
> master/jenkins/jobs/rally.yaml#L70-L74
> > [2] - https://review.openstack.org/405264
> >
> > --
> > Best regards,
> > Andrey Kurilin.
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Managing cluster drivers as individual distro packages

2016-11-18 Thread Spyros Trigazis
Hi all,

In magnum, we implement cluster drivers for the different combinations
of COEs (Container Orchestration Engines) and Operating Systems. The
reasoning behind it is to better encapsulate driver-specific logic and to
allow operators to deploy custom drivers with their deployment-specific
changes.

For example, operators might want to:
* have only custom drivers and not install the upstream ones at all
* offer users only some of the available drivers
* create different combinations of COE + os_distro
* create new experimental/staging drivers

It would be reasonable to manage magnum's cluster drivers as different
packages, since they are designed to be treated as individual entities. To
do
so, we have two options:

1. in-tree: remove the driver entrypoints from magnum/setup.cfg so they are
not installed by default (see the sketch after option 2). This will require
some plumbing to manage them like separate python packages, but it allows
magnum's development team to manage the official drivers inside the service
repo.

2. separate repo: This option sounds cleaner, but requires more refactoring
and will separate the drivers further from the service, with a significant
impact on the development process.
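
For reference, option 1 refers to the entrypoints we register in
magnum/setup.cfg today. Abridged and from memory, so treat the names as
illustrative:

    [entry_points]
    magnum.drivers =
        k8s_fedora_atomic_v1 = magnum.drivers.k8s_fedora_atomic_v1.driver:Driver
        k8s_coreos_v1 = magnum.drivers.k8s_coreos_v1.driver:Driver
        swarm_fedora_atomic_v1 = magnum.drivers.swarm_fedora_atomic_v1.driver:Driver
        mesos_ubuntu_v1 = magnum.drivers.mesos_ubuntu_v1.driver:Driver

A driver maintained out of tree or packaged separately would register its
own entrypoint under the same magnum.drivers namespace.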

Thoughts?

Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-08 Thread Spyros Trigazis
+1 for both

Cheers,
Spyros

On 8 November 2016 at 03:34, Yuanying OTSUKA  wrote:

> +1 for both.
>
> Best regards
> -yuanying
>
> 2016年11月8日(火) 4:23 Hongbin Lu :
>
>> +1!
>>
>> Both jvgrant and yatin contributed a lot to the Magnum project. It would
>> be great to have both of you in the core team.
>>
>> Best regards,
>> Hongbin
>>
>> > -Original Message-
>> > From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> > Sent: November-07-16 2:06 PM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: [openstack-dev] [Magnum] New Core Reviewers
>> >
>> > Magnum Core Team,
>> >
>> > I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum
>> > Core Reviewers. Please respond with your votes.
>> >
>> > Thanks,
>> >
>> > Adrian Otto
>> > ___
>> > ___
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-
>> > requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [magnum] Subjects to discuss during the summit

2016-10-10 Thread Spyros Trigazis
Hi Sergey,

I have seen the session; I wanted to add more details to
start the discussion earlier and to be better prepared.

Thanks,
Spyros


On 10 October 2016 at 17:36, Sergey Kraynev  wrote:

> Hi Spyros,
>
> AFAIK we already have special session slot related with your topic.
> So thank you for the providing all items here.
> Rabi, can we add link on this mail to etherpad ? (it will save our time
> during session :) )
>
> On 10 October 2016 at 18:11, Spyros Trigazis  wrote:
>
>> Hi heat and magnum.
>>
>> Apart from the scalability issues that have been observed, I'd like to
>> add few more subjects to discuss during the summit.
>>
>> 1. One nested stack per node and linear scale of cluster creation
>> time.
>>
>> 1.1
>> For large stacks, the creation of all nested stacks scales linearly. We
>> haven't run any tests using the convergence-engine.
>>
>> 1.2
>> For large stacks, 1000 nodes, the final call to heat to fetch the
>> IPs for all nodes takes 3 to 4 minutes. In heat, the stack has status
>> CREATE_COMPLETE but magnum's state is updated when this long final
>> call is done. Can we do better? Maybe fetch only the master IPs or
>> get the IPs in chunks.
>>
>> 1.3
>> After the stack create API call to heat, magnum's conductor
>> busy-waits heat with a thread/cluster. (In case of a magnum conductor
>> restart, we lose that thread and we can't update the status in
>> magnum). Investigate better ways to sync the status between magnum
>> and heat.
>>
>> 2. Next generation magnum clusters
>>
>> A need that comes up frequently in magnum is heterogeneous clusters.
>> * We want to be able to create clusters on different hardware (e.g. spawn
>>   vms on nodes with SSDs and nodes without SSDs or other special
>>   hardware available only in some nodes of the cluster FPGA, GPU)
>> * Spawn cluster across different AZs
>>
>> I'll describe briefly our plan here, for further information we have a
>> detailed spec under review. [1]
>>
>> To address this issue we introduce the node-group concept in magnum.
>> Each node-group will correspond to a different heat stack. The master
>> nodes can be organized in one or more stacks, so as the worker nodes.
>>
>> We investigate how to implement this feature. We consider the
>> following:
>> At the moment, we have three template files, cluster, master and
>> node, and all three template files create one stack. The new
>> generation of clusters will have a cluster stack containing
>> the resources in the cluster template, specifically, networks, lbaas
>> floating-ips etc. Then, the output of this stack would be passed as
>> input to create the master node stack(s) and the worker nodes
>> stack(s).
>>
>> 3. Use of heat-agent
>>
>> A missing feature in magnum is the lifecycle operations in magnum. For
>> restart of services and COE upgrades (upgrade docker, kubernetes and
>> mesos) we consider using the heat-agent. Another option is to create a
>> magnum agent or daemon like trove.
>>
>> 3.1
>> For restart, a few systemctl restart or service restart commands will
>> be issued. [2]
>>
>> 3.2
>> For upgrades there are four scenarios:
>> 1. Upgrade a service which runs in a container. In this case, a small
>>script that runs in each node is sufficient. No vm reboot required.
>> 2. For an ubuntu based image or similar that requires a package upgrade
>>a similar small script is sufficient too. No vm reboot required.
>> 3. For our fedora atomic images, we need to perform a rebase on the
>>    rpm-ostree file system, which requires a reboot.
>> 4. Finally, a thought under investigation is replacing the nodes one
>>by one using a different image. e.g. Upgrade from fedora 24 to 25
>>with new versions of packages all in a new qcow2 image. How could
>>we update the stack for this?
>>
>> Options 1. and 2. can be done by upgrading all worker nodes at once or
>> one by one. Options 3. and 4. should be done one by one.
>>
>> I'm drafting a spec about upgrades, should be ready by Wednesday.
>>
>> Cheers,
>> Spyros
>>
>> [1] https://review.openstack.org/#/c/352734/
>> [2] https://review.openstack.org/#/c/368981/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-b

[openstack-dev] [heat] [magnum] Subjects to discuss during the summit

2016-10-10 Thread Spyros Trigazis
Hi heat and magnum.

Apart from the scalability issues that have been observed, I'd like to
add few more subjects to discuss during the summit.

1. One nested stack per node and linear scale of cluster creation
time.

1.1
For large stacks, the creation of all nested stacks scales linearly. We
haven't run any tests using the convergence-engine.

1.2
For large stacks, 1000 nodes, the final call to heat to fetch the
IPs for all nodes takes 3 to 4 minutes. In heat, the stack has status
CREATE_COMPLETE, but magnum's state is only updated when this long final
call is done. Can we do better? Maybe fetch only the master IPs or
get the IPs in chunks.
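
For example, instead of one big call we could ask heat only for the master
addresses; the output name below comes from the kubernetes templates, so
double-check it against the driver you use:

    openstack stack output show <stack-id> kube_masters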

1.3
After the stack create API call to heat, magnum's conductor
busy-waits on heat with one thread per cluster. (In case of a magnum
conductor restart, we lose that thread and we can't update the status in
magnum.) We should investigate better ways to sync the status between
magnum and heat.

2. Next generation magnum clusters

A need that comes up frequently in magnum is heterogeneous clusters.
* We want to be able to create clusters on different hardware (e.g. spawn
  vms on nodes with SSDs and nodes without SSDs, or with other special
  hardware such as FPGAs or GPUs available only on some nodes of the
  cluster)
* Spawn clusters across different AZs

I'll describe briefly our plan here, for further information we have a
detailed spec under review. [1]

To address this issue we introduce the node-group concept in magnum.
Each node-group will correspond to a different heat stack. The master
nodes can be organized in one or more stacks, as can the worker nodes.

We investigate how to implement this feature. We consider the
following:
At the moment, we have three template files (cluster, master and
node), and all three together create one stack. The new
generation of clusters will have a cluster stack containing
the resources of the cluster template, specifically networks, lbaas,
floating ips etc. Then, the outputs of this stack would be passed as
inputs to create the master node stack(s) and the worker node
stack(s).

3. Use of heat-agent

A missing feature in magnum is lifecycle operations. For
restarting services and for COE upgrades (upgrading docker, kubernetes and
mesos) we are considering the heat-agent. Another option is to create a
magnum agent or daemon, like trove does.

3.1
For restarting services, a few systemctl restart or service restart
commands will be issued. [2]
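
For a kubernetes cluster the commands would look something like the
following; the exact unit names depend on the image, so this is only a
sketch:

    # on master nodes
    systemctl restart kube-apiserver kube-controller-manager kube-scheduler
    # on worker nodes
    systemctl restart docker kubelet kube-proxy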

3.2
For upgrades there are four scenarios:
1. Upgrade a service which runs in a container. In this case, a small
   script that runs on each node is sufficient. No vm reboot required.
2. For an ubuntu-based image or similar that requires a package upgrade,
   a similar small script is sufficient too. No vm reboot required.
3. For our fedora atomic images, we need to perform a rebase of the
   rpm-ostree file system, which requires a reboot (see the sketch after
   this list).
4. Finally, a thought under investigation is replacing the nodes one
   by one using a different image, e.g. upgrade from fedora 24 to 25
   with new versions of all packages in a new qcow2 image. How could
   we update the stack for this?
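
For scenario 3, each node would do roughly the following; the ref below is
only an example:

    sudo rpm-ostree rebase fedora-atomic:fedora-atomic/25/x86_64/docker-host
    sudo systemctl reboot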

Options 1. and 2. can be done by upgrading all worker nodes at once or
one by one. Options 3. and 4. should be done one by one.

I'm drafting a spec about upgrades; it should be ready by Wednesday.

Cheers,
Spyros

[1] https://review.openstack.org/#/c/352734/
[2] https://review.openstack.org/#/c/368981/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Magnum:

2016-10-03 Thread Spyros Trigazis
Hi Kamal.

On 3 October 2016 at 09:33, kamalakannan sanjeevan <
chirukamalakan...@gmail.com> wrote:

> Hi All,
>
> I have installed Mitaka on ubuntu14.04. I have tried an all in one
> installation along with cinder using dd and then creating the
> cinder-volumes at /dev/loop2. The network neutron is using linuxbridge with
> vxlan.
>
> I am able to create instances that do not have internet reachability for
> same reason.
>

You must have an Internet connection in the VMs. For example, in swarm, the
swarm container is fetched from Docker Hub on cluster creation.


>
> I have then install magnum and the python-magnum-client
>
> I get the below error as shown in the logs
>
> Service list displays after installing python-magnumclient 2.3.0, with
> full path only.
>

Magnum mitaka is incompatible with magnumclient 2.3.0. It's better to use
magnum newton.


>
> I did follow the magnum installation using http://docs.openstack.org/
> project-install-guide/container-infrastructure-management/draft/install-
> ubuntu.html
>

This guide, which includes "draft" in the url, is intended for magnum from
the master branch; for newton you should use [newton-install] (currently
master and newton are the same), and for mitaka there is NO install guide.

[newton-install]
http://docs.openstack.org/project-install-guide/container-infrastructure-management/newton/install-ubuntu.html


>
> The certificates are used *x509keypair* on mitaka
>

This feature is available only from magnum Newton.


>
> root@VFSR1:/opt/mesos_image# 
> /opt/python-magnumclient/.magnumclient-env/bin/magnum
> service-list
> ++-+--+---+-
> -+-+---+
> ---+
> | id | host| binary   | state | disabled |
> disabled_reason | created_at| updated_at|
> ++-+--+---+-
> -+-+---+
> ---+
> | 1  | VFSR1.svcmgr.io | magnum-conductor | up|  |
> -   | 2016-09-30T05:24:19+00:00 | 2016-10-03T06:58:44+00:00 |
> ++-+--+---+-
> -+-+---+
> ---+
>
> Images available as below
>
> root@VFSR1:/opt/mesos_image# glance image-list
> +--+--+
> | ID   | Name |
> +--+--+
> | c1c8e84e-12ba-4b05-b382-e57850e5dd6d | cirros   |
> | affb50c2-ca04-41fa-bf73-48ae526d2b15 | fedora-atomic-latest |
> | 94ee6d6e-93fa-47b2-844f-2d8d2ad1a788 | ubuntu-14.04.3-mesos |
> | f9acd880-f50f-493a-b6ed-46620b7b3481 | ubuntu-mesos |
> +--+--+
>
> DNS configured on this machine
>
> root@VFSR1:/opt/mesos_image# cat /etc/resolv.conf
> # Dynamic resolv.conf(5) file for glibc resolver(3) generated by
> resolvconf(8)
> # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
> nameserver 172.27.10.76
> nameserver 172.27.0.5
> search svcmgr.io
>
>
> key pair and network list on this machine
>
> root@VFSR1:/opt/mesos_image# openstack keypair create --public-key
> ~/.ssh/id_rsa.pub testkey
> +-+-+
> | Field   | Value   |
> +-+-+
> | fingerprint | e0:9f:b5:91:e5:e4:39:90:c3:7d:7e:9a:ff:55:e3:29 |
> | name| testkey |
> | user_id | 3bb731e1886347a19e90c06185be8a9c|
> +-+-+
>
> root@VFSR1:/opt/mesos_image# openstack network list
> +--+
> ---+
> --+
> | ID   | Name
> | Subnets  |
> +--+
> ---+
> --+
> | 02bb0e68-1454-49ba-a40b-98130f58d9f6 | private
> | 9e5dfec3-7394-4ffc-b2c9-b24110b6d495 |
> | 555fbf56-e7ac-40ef-96cb-573a862ae42f | private1
> | 9abedee6-4c3c-4edc-a0cd-15571bc2ce51 |
> | 9ea39255-9e51-433f-95a1-cb8cf51543ea | public
> | 436bb0a4-e999-4874-844c-567e6312fe3e |
> | 069923b6-f657-4fca-8c5a-e0262c52f8c7 | public1
> | 1a8dad61-3261-41a4-86c2-7ad107fd78cb |
> | a91b3943-ac8b-41ca-9767-ad9cf2c1dc60 | 
> swarm-cluster-zhxyvth46o5c-fixed_network-xaz6nx43ec5e
> | 61789da1-17c9-431e-b728-22c4b923fd53 |
> +--+
> ---+
> --+
>
> Volumes and cinder on the machine
> 

Re: [openstack-dev] Seeking Mobile Beta Testers

2016-09-03 Thread Spyros Trigazis
Hi.

Count me in: Samsung Galaxy A3, Android 5.0.2.

Cheers,
Spyros

On 3 September 2016 at 11:31, Fawaz Mohammed 
wrote:

> Interested, Android version 6.0.1
>
> On Sep 3, 2016 9:09 AM, "Swapnil Kulkarni"  wrote:
>
>> Count me in for Android beta testing.
>>
>> On Sep 2, 2016 11:31 PM, "Jimmy McArthur"  wrote:
>> >
>> > We're looking for a handful of community members to test our updated
>> OpenStack Summit mobile Apps. If you're interested, shoot me a note, along
>> with iOS/Android preference, and we'll get you set up.
>> >
>> > Thank you,
>> > Jimmy McArthur
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-08 Thread Spyros Trigazis
Hello team,

I just acquired the openstackmagnum account [1] on Docker Hub. It's an
organization account, so all core team members can be owners. Cores, please
share with me your Docker Hub ID or registered e-mail and I'll add you. I
already added Adrian and Egor.

In organization accounts we can have different teams with different
permissions. [2]

Cheers,
Spyros

[1] https://hub.docker.com/u/openstackmagnum/
[2] https://docs.docker.com/docker-hub/orgs/

On 5 August 2016 at 18:12, Steven Dake (stdake)  wrote:

> Tango,
>
> Sorry to hear that, but glad I could help clarify things :)
>
> Regards
> -steve
>
> From: Ton Ngo 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Friday, August 5, 2016 at 7:38 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [docker] [magnum] Magnum account on Docker
> Hub
>
> Thanks Steve, Spyros. I checked with Docker Hub support and the "magnum"
> account is not registered to Steve,
> so we will just use the new account "openstackmagnum".
> Ton,
>
> [image: Inactive hide details for Spyros Trigazis ---08/02/2016 09:27:38
> AM---I just filed a ticket to acquire the username openstackma]Spyros
> Trigazis ---08/02/2016 09:27:38 AM---I just filed a ticket to acquire the
> username openstackmagnum. I included Hongbin's contact informat
>
> From: Spyros Trigazis 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 08/02/2016 09:27 AM
> Subject: Re: [openstack-dev] [docker] [magnum] Magnum account on Docker
> Hub
> --
>
>
>
> I just filed a ticket to acquire the username openstackmagnum.
>
> I included Hongbin's contact information explaining that he's the
> project's PTL.
>
> Thanks Steve,
> Spyros
>
>
> On 2 August 2016 at 13:29, Steven Dake (stdake) <*std...@cisco.com*
> > wrote:
>
>Ton,
>
>I may or may not have set it up early in Magnum's development.  I just
>don't remember.  My recommendation is to file a support ticket with docker
>and see if they will tell you who it belongs to (as in does it belong to
>one of the founders of Magnum) or if it belongs to some other third party.
>Their support is very fast.  They may not be able to give you the answer if
>its not an openstacker.
>
>Regards
>-steve
>
>
>*From: *Ton Ngo <*t...@us.ibm.com* >
> * Reply-To: *"OpenStack Development Mailing List (not for usage
>questions)" <*openstack-dev@lists.openstack.org*
>>
> * Date: *Monday, August 1, 2016 at 1:06 PM
> * To: *OpenStack Development Mailing List <
>*openstack-dev@lists.openstack.org* 
>>
> * Subject: *[openstack-dev] [docker] [magnum] Magnum account on Docker Hub
>Hi everyone,
>  At the last IRC meeting, the team discussed the need for hosting
>  some container images on Docker Hub
>  to facilitate development. There is currently a Magnum account
>  on Docker Hub, but this is not owned by anyone
>  on the team, so we would like to find who the owner is and
>  whether this account was set up for OpenStack Magnum.
>  Thanks in advance!
>  Ton Ngo,
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>*openstack-dev-requ...@lists.openstack.org?subject:unsubscribe*
><http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> *http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev*
><http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] devstack magnum.conf

2016-08-05 Thread Spyros Trigazis
Hi,

It's better to follow the quickstart guide [1], which sets up magnum with
the devstack plugin.
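
In short, the guide boils down to a local.conf similar to this minimal
sketch (see the guide for the complete file, passwords and services):

    [[local|localrc]]
    enable_plugin heat https://git.openstack.org/openstack/heat
    enable_plugin magnum https://git.openstack.org/openstack/magnum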

Cheers,
Spyros

[1] http://docs.openstack.org/developer/magnum/dev/quickstart.html

On 5 August 2016 at 06:22, Yasemin DEMİRAL (BİLGEM BTE) <
yasemin.demi...@tubitak.gov.tr> wrote:

>
> Hi
>
> I try to magnum on devstack, in the manual  Configure magnum: section
> has sudo cp etc/magnum/magnum.conf.sample /etc/magnum/magnum.conf command,
> but there is no magnum.conf.
>  What should i do ?
>
> Thanks
>
> Yasemin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-02 Thread Spyros Trigazis
I just filed a ticket to acquire the username openstackmagnum.

I included Hongbin's contact information explaining that he's the project's
PTL.

Thanks Steve,
Spyros


On 2 August 2016 at 13:29, Steven Dake (stdake)  wrote:

> Ton,
>
> I may or may not have set it up early in Magnum's development.  I just
> don't remember.  My recommendation is to file a support ticket with docker
> and see if they will tell you who it belongs to (as in does it belong to
> one of the founders of Magnum) or if it belongs to some other third party.
> Their support is very fast.  They may not be able to give you the answer if
> its not an openstacker.
>
> Regards
> -steve
>
>
> From: Ton Ngo 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, August 1, 2016 at 1:06 PM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [docker] [magnum] Magnum account on Docker Hub
>
> Hi everyone,
> At the last IRC meeting, the team discussed the need for hosting some
> container images on Docker Hub
> to facilitate development. There is currently a Magnum account on Docker
> Hub, but this is not owned by anyone
> on the team, so we would like to find who the owner is and whether this
> account was set up for OpenStack Magnum.
> Thanks in advance!
> Ton Ngo,
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Is LBAAS mandatory for MAGNUM ?

2016-07-18 Thread Spyros Trigazis
Hi Greg,

lbaas *v1* is required for magnum mitaka.
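
If you are on devstack, a minimal sketch of what that means in local.conf
(assuming the neutron-lbaas devstack plugin layout of that era) is:

enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
enable_service q-lbaas

For a packaged deployment, the equivalent is installing neutron-lbaas and
enabling the LBaaS v1 service plugin and agent in the neutron configuration.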

Cheers,
Spyros

On 18 July 2016 at 16:16, Waines, Greg  wrote:

> Thanks Madhuri,
>
>
>
> This blueprint is ‘Accepted for Newton’.
>
> So in ‘Mitaka’ and before, LBAAS is required for Magnum ?
>
>
>
> Greg.
>
>
>
> *From: *"Kumari, Madhuri" 
> *Reply-To: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> *Date: *Monday, July 18, 2016 at 10:07 AM
> *To: *"openstack-dev@lists.openstack.org" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [Magnum] Is LBAAS mandatory for MAGNUM ?
>
>
>
> Hi Greg,
>
>
>
> Now it is not mandatory to have lbaas in Magnum. Here is a blueprint in
> Magnum that aims to decouple lbaas from Magnum:
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas.
>
> You can use the --master-lb-enabled flag in the baymodel to specify whether
> you want lbaas or not. However, it only allows you to disable lbaas when the
> master count is 1.
>
>
>
> Regards,
>
> Madhuri
>
>
>
> *From:* Waines, Greg [mailto:greg.wai...@windriver.com]
> *Sent:* Monday, July 18, 2016 5:11 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* [openstack-dev] [Magnum] Is LBAAS mandatory for MAGNUM ?
>
>
>
> I’m relatively new to looking at Magnum.
>
> Just recently played with Magnum in devstack on Newton.
>
> I noticed that the HEAT Stack used by Magnum created Load Balancer Pool
> and Load Balancer HealthMonitor.
>
>
>
> QUESTION … Is LBAAS support mandatory for MAGNUM ?  or can it be used
> (configured) without it ?
>
>
>
> i.e. if the OpenStack distribution being used does NOT support LBAAS, will
> MAGNUM work ?   will it still be useful ?
>
>
>
> ( … thinking that it could still be used, although would not support the
> load balancing across scaled or multiple instances of a container … )
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] [install-guide] Move launch-instance instructions in project repos

2016-06-28 Thread Spyros Trigazis
Hi.

I'd like to propose moving the "launch-instance" section [1] into project
repos along with the install-guide. If we don't move it, we must find an
appropriate place for it.

Cheers,
Spyros

[1]
http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Ceilometer and Aodh install guide(s)

2016-06-28 Thread Spyros Trigazis
+1 on the modular approach by Rodrigo Caballero

I'm writing magnum's guide and I'm working on the debian guide. Debian's
guide will have a couple of differences, and I plan to move them into other
files and/or break up the existing common config files.

IMO, one of the goals of the project-specific guides was to let teams decide
what works for them. If the output guide is similar to the others, I think
you can choose what suits you best.

Cheers,
Spyros


On 28 June 2016 at 16:44, Julien Danjou  wrote:

> On Tue, Jun 28 2016, Ildikó Váncsa wrote:
>
> > I have a third less urgent question. The install-guide has it's own
> folder at
> > the same level where these two projects have their 'doc' folder. I would
> assume
> > other projects have the same or similar folder for the developer docs.
> Would
> > that be reasonable/possible to have one main 'doc' folder for all the
> docs?
>
> This is our long-term objective for Telemetry projects.
>
> Gnocchi already have only one doc/ folder with all the documentation,
> From installation to usage.
>
> I don't think our projects are big enough or have enough resources
> to start maintaining different sets of documentation with different scopes,
> etc.
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-06-20 Thread Spyros Trigazis
Hi Yuanying.

On 21 June 2016 at 08:02, Yuanying OTSUKA  wrote:

> Hi, Spyros
>
> Thanks for testing it.
> Maybe you see that there are some problems with supporting baremetal.
> We should add a functional test to our jenkins job,
> because this template will break easily if anyone adds some logic to
> our templates/code.
> But following problems will block us.
>
> 1. How to get Fedora 23 image which includes k8s?
>

* Isn't the image that Ton uploaded good enough? He built it by following
your instructions.
* Can we use atomic for baremetal?


> 2. How to solve Ironic instance_info problem?
>

I'll look into this in more detail.


>
> Currently I have no idea, maybe time will solve these?
>
>
> Thanks
> -yuanying
>

Thanks,
Spyros


>
> 2016年6月21日(火) 0:30 Spyros Trigazis :
>
>> Hi Yuanying,
>>
>> I tested your patch [2] with the image that Ton created [1] and it worked
>> .
>>
>> For me devicemapper as docker-storage-driver didn't work but this is
>> unrelated to this patch, I'll update devicemapper. I used overlay and
>> it was ok.
>>
>> I'll sum up what I did here, for others to test.
>>
>> On a fresh install of Ubuntu 14.04.3
>>
>> 0-
>> setup environment as in:
>>
>> http://docs.openstack.org/developer/magnum/dev/dev-quickstart.html#dev-quickstart
>>
>> 1-
>> I used a local.conf with less configuration and I added magnum.
>> https://stikked.web.cern.ch/stikked/view/35816b1d
>>
>> 2-
>> Update subnets with dns-nameserver
>> neutron subnet-update private-subnet --dns-nameserver 8.8.8.8
>> neutron subnet-update public-subnet --dns-nameserver 8.8.8.8
>>
>> 3-
>> Modify ironic.nodes table
>> alter table ironic.nodes modify instance_info LONGTEXT;
>>
>> 4-
>> download images [1] and register as in:
>>
>> https://review.openstack.org/#/c/320968/10/magnum/elements/kubernetes/README.md
>>
>> 5-
>> update iptables as in our devstack script:
>> https://github.com/openstack/magnum/blob/master/devstack/lib/magnum#L326
>>
>> 6-
>> magnum baymodel-create --name k8s-ironic-baymodel \
>>--keypair-id testkey \
>>--server-type bm \
>>--external-network-id public \
>>--fixed-network private \
>>--image-id fedora-k8s \
>>--flavor-id baremetal \
>>--network-driver flannel \
>>--dns 8.8.8.8 \
>>--coe kubernetes \
>>--docker-storage-driver overlay
>>
>> 7-
>> create bay
>> magnum bay-create --name k8s-ironbay --baymodel k8s-ironic-baymodel 
>> --node-count
>> 1
>>
>> It took a few minutes to get CREATE_COMPLETE on my 4-core desktop.
>>
>> Thanks Yuanying and Ton!
>>
>> Cheers,
>> Spyros
>>
>>
>> [1] https://fedorapeople.org/groups/magnum/fedora-23-kubernetes*
>> [2] https://review.openstack.org/#/c/320968/
>>
>>
>> On 14 June 2016 at 03:26, Yuanying OTSUKA  wrote:
>>
>>> Hi, Spyros
>>>
>>> I updated ironic heat template, and succeeded booting k8s bay with
>>> Ironic.
>>> Could you test it?
>>>
>>> Unfortunately there are some problem and requirement to test.
>>> I describe below.
>>>
>>> * subnet which belongs to private network should be set up with
>>> dns_nameservers like following.
>>>
>>> $ neutron subnet-update private-subnet —dns-nameserver 8.8.8.8
>>>
>>> * modify ironic.nodes table
>>>
>>> $ alter table ironic.nodes modify instance_info LONGTEXT;
>>>
>>> * baymodel
>>>
>>> $ magnum baymodel-create —name kubernetes —keypair-id default \
>>>--server-type bm \
>>>--external-network-id public \
>>>--fixed-network private \
>>>--image-id fedora-k8s \
>>>--flavor-id baremetal \
>>>    --network-driver flannel \
>>>--coe kubernetes
>>>
>>> * Fedora image
>>> Following procedure depends on diskimage-builder fix:
>>> https://review.openstack.org/#/c/247296/
>>>
>>> https://review.openstack.org/#/c/320968/10/magnum/elements/kubernetes/README.md
>>>
>>> * my local.con

Re: [openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-06-20 Thread Spyros Trigazis
Hi Yuanying,

I tested your patch [2] with the image that Ton created [1] and it worked.

For me, devicemapper as docker-storage-driver didn't work, but this is
unrelated to this patch; I'll update devicemapper. I used overlay and
it was ok.

I'll sum up what I did here, for others to test.

On a fresh install of Ubuntu 14.04.3

0-
setup environment as in:
http://docs.openstack.org/developer/magnum/dev/dev-quickstart.html#dev-quickstart

1-
I used a local.conf with less configuration and I added magnum.
https://stikked.web.cern.ch/stikked/view/35816b1d

2-
Update subnets with dns-nameserver
neutron subnet-update private-subnet --dns-nameserver 8.8.8.8
neutron subnet-update public-subnet --dns-nameserver 8.8.8.8

3-
Modify ironic.nodes table
alter table ironic.nodes modify instance_info LONGTEXT;

4-
download images [1] and register as in:
https://review.openstack.org/#/c/320968/10/magnum/elements/kubernetes/README.md

5-
update iptables as in our devstack script:
https://github.com/openstack/magnum/blob/master/devstack/lib/magnum#L326

6-
magnum baymodel-create --name k8s-ironic-baymodel \
   --keypair-id testkey \
   --server-type bm \
   --external-network-id public \
   --fixed-network private \
   --image-id fedora-k8s \
   --flavor-id baremetal \
   --network-driver flannel \
   --dns 8.8.8.8 \
   --coe kubernetes \
   --docker-storage-driver overlay

7-
create bay
magnum bay-create --name k8s-ironbay --baymodel k8s-ironic-baymodel
--node-count
1

It took a few minutes to get CREATE_COMPLETE on my 4-core desktop.
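
To follow progress you can poll the bay until it goes from CREATE_IN_PROGRESS
to CREATE_COMPLETE (a sketch using the client commands of that release):

magnum bay-list
magnum bay-show k8s-ironbay
heat stack-list    # the underlying stack gives more detail if it fails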

Thanks Yuanying and Ton!

Cheers,
Spyros


[1] https://fedorapeople.org/groups/magnum/fedora-23-kubernetes*
[2] https://review.openstack.org/#/c/320968/


On 14 June 2016 at 03:26, Yuanying OTSUKA  wrote:

> Hi, Spyros
>
> I updated ironic heat template, and succeeded booting k8s bay with Ironic.
> Could you test it?
>
> Unfortunately there are some problem and requirement to test.
> I describe below.
>
> * subnet which belongs to private network should be set up with
> dns_nameservers like following.
>
> $ neutron subnet-update private-subnet —dns-nameserver 8.8.8.8
>
> * modify ironic.nodes table
>
> $ alter table ironic.nodes modify instance_info LONGTEXT;
>
> * baymodel
>
> $ magnum baymodel-create —name kubernetes —keypair-id default \
>--server-type bm \
>--external-network-id public \
>--fixed-network private \
>--image-id fedora-k8s \
>--flavor-id baremetal \
>--network-driver flannel \
>--coe kubernetes
>
> * Fedora image
> Following procedure depends on diskimage-builder fix:
> https://review.openstack.org/#/c/247296/
>
> https://review.openstack.org/#/c/320968/10/magnum/elements/kubernetes/README.md
>
> * my local.conf to setup ironic env
> http://paste.openstack.org/show/515877/
>
>
> Thanks
> -yuanying
>
>
> 2016年5月25日(水) 22:00 Yuanying OTSUKA :
>
>> Hi, Spyros
>>
>> I fixed a conflicts and upload following patch.
>> * https://review.openstack.org/#/c/320968/
>>
>> But it isn’t tested yet, maybe it doesn’t work..
>> If you have a question, please feel free to ask.
>>
>>
>> Thanks
>> -yuanying
>>
>>
>>
>> 2016年5月25日(水) 17:56 Spyros Trigazis :
>>
>>> Hi Yuanying,
>>>
>>> please upload your workaround. I can test it and try to fix the
>>> conflicts.
>>> Even if it conflicts we can have some iterations on it.
>>>
>>> I'll upload later what worked for me on devstack.
>>>
>>> Thanks,
>>> Spyros
>>>
>>> On 25 May 2016 at 05:13, Yuanying OTSUKA  wrote:
>>>
>>>> Hi, Hongbin, Spyros.
>>>>
>>>> I’m also interested in this work.
>>>> I have a workaround patch to support ironic
>>>> (but it currently conflicts with master).
>>>> Would it be helpful to upload it as an initial step of the implementation?
>>>>
>>>> Thanks
>>>> -yuanying
>>>>
>>>> 2016年5月25日(水) 6:52 Hongbin Lu :
>>>>
>>>>> Hi all,
>>>>>
>>>>>
>>>>>
>>>>> One of the most important feature that Magnum team wants to deliver in
>>>>> Newton is the full baremetal support. There is a blueprint [1] created for
>>>>> that and the blueprint was marked as “essential” (that is the

Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-06-13 Thread Spyros Trigazis
Hi Gary.

On 13 June 2016 at 09:06, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) <
li-gong.d...@hpe.com> wrote:

> Hi Tom/All,
>
> >6. Ironic Integration:
> https://etherpad.openstack.org/p/newton-magnum-ironic-integration
> >- Start the implementation immediately
> >- Prefer quick work-around for identified issues (cinder volume
> attachment, variation of number of ports, etc.)
>
> >We need to implement a bay template that can use a flat networking model
> as this is the only networking model Ironic currently supports.
> Multi-tenant networking is imminent. This should be done before work on an
> Ironic template starts.
>
> We have already implemented a bay template that uses a flat networking
> model, along with other Python code (making magnum find the correct heat
> template), which is used in our own project.
> What do you think of this feature? If you think it is necessary for
> Magnum, I can contribute this code to Magnum upstream.
>

This feature is useful to magnum and there is a blueprint for that:
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips
You can add some notes on the whiteboard about your proposed change.

As for the ironic integration, we should modify the existing templates;
there is work in progress on that: https://review.openstack.org/#/c/320968/

By the way, did you add new yaml files, or did you modify the existing
kubemaster, minion and cluster ones?

Cheers,
Spyros


>
> Regards,
> Gary Duan
>
>
> -Original Message-
> From: Cammann, Tom
> Sent: Tuesday, May 03, 2016 1:12 AM
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum] Notes for Magnum design summit
>
> Thanks for the write up Hongbin and thanks to all those who contributed to
> the design summit. A few comments on the summaries below.
>
> 6. Ironic Integration:
> https://etherpad.openstack.org/p/newton-magnum-ironic-integration
> - Start the implementation immediately
> - Prefer quick work-around for identified issues (cinder volume
> attachment, variation of number of ports, etc.)
>
> We need to implement a bay template that can use a flat networking model
> as this is the only networking model Ironic currently supports.
> Multi-tenant networking is imminent. This should be done before work on an
> Ironic template starts.
>
> 7. Magnum adoption challenges:
> https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
> - The challenges is listed in the etherpad above
>
> Ideally we need to turn this list into a set of actions which we can
> implement over the cycle, i.e. create a BP to remove requirement for LBaaS.
>
> 9. Magnum Heat template version:
> https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
> - In each bay driver, version the template and template definition.
> - Bump template version for minor changes, and bump bay driver version for
> major changes.
>
> We decided only bay driver versioning was required. The template and
> template driver do not need versioning because we can get heat to
> pass back the template which it used to create the bay.
>
> 10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
> - Add support for sending notifications to Ceilometer
> - Revisit bay monitoring and self-healing later
> - Container monitoring should not be done by Magnum, but it can be done by
> cAdvisor, Heapster, etc.
>
> We split this topic into 3 parts – bay telemetry, bay monitoring,
> container monitoring.
> Bay telemetry is done around actions such as bay/baymodel CRUD operations.
> This is implemented using ceilometer notifications.
> Bay monitoring is around monitoring health of individual nodes in the bay
> cluster and we decided to postpone work as more investigation is required
> on what this should look like and what users actually need.
> Container monitoring focuses on what containers are running in the bay and
> general usage of the bay COE. We decided this will be completed by
> Magnum by adding access to cAdvisor/heapster, baking in access to
> cAdvisor by default.
>
> - Manually manage bay nodes (instead of being managed by Heat
> ResourceGroup): It can address the use case of heterogeneity of bay nodes
> (i.e. different availability zones, flavors), but need to elaborate the
> details further.
>
> The idea revolves around creating a heat stack for each node in the bay.
> This idea shows a lot of promise but needs more investigation and isn’t a
> current priority.
>
> Tom
>
>
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Saturday, 30 April 2016 at 05:05
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [magnum] Notes for Magnum design summit
>
> Hi team,
>
> For reference, below is a summary of the discussions/decisions in Austin
> design summit. P

Re: [openstack-dev] Quesion about Openstack Containers and Magnum

2016-06-10 Thread Spyros Trigazis
Hi Wally.

You can follow these instructions [1] to install from source
(use the stable/mitaka branch when you clone).

Although this guide lives under the developer URL, it's written for
operators. Keep in mind that you need Neutron LBaaS v1.

Additionally, there is this puppet module [2]. A rough sketch of what the
from-source route boils down to is included below the references.

Please share your experience.

Cheers,
Spyros

[1]
http://docs.openstack.org/developer/magnum/install-guide-from-source.html
[2] https://github.com/openstack/puppet-magnum
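
At a very high level, the from-source route in [1] boils down to something
like the following on the controller (a rough sketch only; the guide has the
complete, distro-specific steps for the database, keystone endpoints and
configuration):

git clone https://git.openstack.org/openstack/magnum -b stable/mitaka
cd magnum && sudo pip install -e .
# then create the magnum database and keystone service/endpoints,
# write /etc/magnum/magnum.conf and start magnum-api and magnum-conductor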

On 10 June 2016 at 06:26, zhihao wang  wrote:

> Dear Openstack Dev Members:
>
> I would like to install Magnum on OpenStack to manage Docker
> containers.
> I have an OpenStack Liberty production setup: one controller node and a
> few compute nodes.
>
> I am wondering how I can install OpenStack Magnum on OpenStack Liberty in a
> distributed production environment (1 controller node and some compute
> nodes).
>
> I know I can install Magnum using devstack, but I don't want the developer
> version.
>
> Is there a way/guide to install it in a production environment?
>
> Thanks
> Wally
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-08 Thread Spyros Trigazis
Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu  wrote:

> Ricardo,
>
> Thanks for the offer. Could I know where the exact location is?
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> > Sent: June-08-16 5:43 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> >
> > Hi Hongbin.
> >
> > Not sure how this fits everyone, but we would be happy to host it at
> > CERN. How do people feel about it? We can add a nice tour of the place
> > as a bonus :)
> >
> > Let us know.
> >
> > Ricardo
> >
> >
> >
> > On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> > wrote:
> > > Hi all,
> > >
> > >
> > >
> > > Please find the Doodle pool below for selecting the Magnum midcycle
> > date.
> > > Presumably, it will be a 2 days event. The location is undecided for
> > now.
> > > The previous midcycles were hosted in bay area so I guess we will
> > stay
> > > there at this time.
> > >
> > >
> > >
> > > http://doodle.com/poll/5tbcyc37yb7ckiec
> > >
> > >
> > >
> > > In addition, the Magnum team is finding a host for the midcycle.
> > > Please let us know if you interest to host us.
> > >
> > >
> > >
> > > Best regards,
> > >
> > > Hongbin
> > >
> > >
> > >
> > __
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] [install-guide] Install guide from source

2016-06-01 Thread Spyros Trigazis
This work aims to be published on magnum's developer page:
http://docs.openstack.org/developer/magnum/

Cheers,
Spyros


On 1 June 2016 at 17:30, Andreas Jaeger  wrote:

> On 06/01/2016 05:21 PM, Spyros Trigazis wrote:
>
>> Hi everyone,
>>
>> Is the idea of having an install-guide from source and possibly
>> virtualenvs still under consideration?
>>
>> I'd like to share with you what we are currently doing along with
>> the install-guide based on the cookiecutter template.
>>
>> I have created this change [1] in our project repo. Although some
>> commands are ugly it works in the same way on Ubuntu, Fedora,
>> Suse and Debian. Since this change aims Newton release, we clone
>> from master, when we branch will update to clone from the stable
>> branch.
>>
>> Cheers,
>> Spyros
>>
>> [1] https://review.openstack.org/#/c/319399/
>>
>
> We will not have a full from source guide - let's grow the existing one
> first before adding another variation ;). The idea was AFAIR that projects
> can install from source if there are no packages for them.
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] [install-guide] Install guide from source

2016-06-01 Thread Spyros Trigazis
Hi everyone,

Is the idea of having an install-guide from source and possibly
virtualenvs still under consideration?

I'd like to share with you what we are currently doing along with
the install-guide based on the cookiecutter template.

I have created this change [1] in our project repo. Although some
commands are ugly, it works in the same way on Ubuntu, Fedora,
SUSE and Debian. Since this change targets the Newton release, we clone
from master; when we branch, we will update it to clone from the stable
branch.

Cheers,
Spyros

[1] https://review.openstack.org/#/c/319399/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-01 Thread Spyros Trigazis
Hi.

I have added https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun

Regards,
Spyros


On 1 June 2016 at 16:39, Hongbin Lu  wrote:

> Hi lbaas team,
>
>
>
> I wonder if there is an operator-facing installation guide for
> neutron-lbaas. I asked that because Magnum is working on an installation
> guide [1] and neutron-lbaas is a dependency of Magnum. We want to link to
> an official lbaas guide so that our users will have complete
> instructions. Any pointer?
>
>
>
> [1] https://review.openstack.org/#/c/319399/
>
>
>
> Best regards,
>
> Hongbin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need helps to implement the full baremetals support

2016-05-25 Thread Spyros Trigazis
Hi Yuanying,

please upload your workaround. I can test it and try to fix the conflicts.
Even if it conflicts, we can have some iterations on it.

I'll upload later what worked for me on devstack.
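
For the "exercise those templates in Heat" part, the kind of smoke test I
have in mind is roughly the following (the parameter names below are
assumptions; check the templates' parameters section for the real ones):

heat stack-create k8s-ironic-smoke \
  -f magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml \
  -P ssh_key_name=testkey -P external_network=public \
  -P server_image=fedora-k8s
heat stack-show k8s-ironic-smoke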

Thanks,
Spyros

On 25 May 2016 at 05:13, Yuanying OTSUKA  wrote:

> Hi, Hongbin, Spyros.
>
> I’m also interested in this work.
> I have a workaround patch to support ironic
> (but it currently conflicts with master).
> Would it be helpful to upload it as an initial step of the implementation?
>
> Thanks
> -yuanying
>
> 2016年5月25日(水) 6:52 Hongbin Lu :
>
>> Hi all,
>>
>>
>>
>> One of the most important features that the Magnum team wants to deliver in
>> Newton is full baremetal support. There is a blueprint [1] created for
>> that and the blueprint was marked as “essential” (that is the highest
>> priority). Spyros is the owner of the blueprint and he is looking for help
>> from other contributors. For now, we immediately need help to fix the
>> existing Ironic templates [2][3][4] that are used to provision a Kubernetes
>> cluster on top of baremetal instances. These templates used to work,
>> but they have become outdated. We need help to fix those Heat templates
>> as an initial step of the implementation. Contributors are expected to
>> follow the Ironic devstack guide to setup the environment. Then, exercise
>> those templates in Heat.
>>
>>
>>
>> If you interest to take the work, please contact Spyros or me and we will
>> coordinate the efforts.
>>
>>
>>
>> [1]
>> https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
>>
>> [2]
>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubecluster-fedora-ironic.yaml
>>
>> [3]
>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubemaster-fedora-ironic.yaml
>>
>> [4]
>> https://github.com/openstack/magnum/blob/master/magnum/templates/kubernetes/kubeminion-fedora-ironic.yaml
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Docs] Enhance Docs Landing Page Descriptive Text

2016-05-14 Thread Spyros Trigazis
Hi.

Short answer: Very good idea, I'd like to see the spec.

Slightly longer:
Moving this information to the docs landing page will increase the visibility
of Big Tent projects and gives operators a way to know what's available. It
partially addresses our concern about treating Big Tent projects as OpenStack
first-class citizens in the install-guide. Finally, an operator who wants to
install a Big Tent project should be informed about the project's
prerequisites as early as possible.

Cheers,
Spyros Trigazis
(Member of Magnum)


On 14 May 2016 at 00:08, Laura Clymer  wrote:

> Hi everyone,
>
> In the current Ubuntu install guide, there is this section:
> http://docs.openstack.org/mitaka/install-guide-ubuntu/common/app_support.html
>
> It contains a good deal of description on the type of information
> contained in each of the release-level docs. This type of description is
> very helpful to new users in that it helps them understand where to look
> for information. Given the major re-design for the Install Guide coming up,
> I would like to propose that the text in this section is migrated (and
> perhaps enhanced) to the docs landing page.
>
> I am happy to write up a specification for the suggested text and submit
> it for further review, but I wanted to see if anyone else thinks this is a
> good idea?
>
> Thanks,
>
> Laura Clymer
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need a volunteer for documentation liaisons

2016-05-11 Thread Spyros Trigazis
Hi,

Since I work on the install docs (btw, I'll push to our repo in the next
hour), I could do it.

Spyros

On 11 May 2016 at 00:35, Anthony Chow  wrote:

> HongBin,
>
> What is the skill requirement or credential for this documentation liaison
> role?  I am interested in doing this
>
> Anthony.
>
> On Tue, May 10, 2016 at 3:24 PM, Hongbin Lu  wrote:
>
>> Hi team,
>>
>> We need a volunteer as liaison for documentation team. Just let me know
>> if you interest in this role.
>>
>> Best regards,
>> Hongbin
>>
>> > -Original Message-
>> > From: Lana Brindley [mailto:openst...@lanabrindley.com]
>> > Sent: May-10-16 5:47 PM
>> > To: OpenStack Development Mailing List; enstack.org
>> > Subject: [openstack-dev] [PTL][docs]Update your cross-project liaison!
>> >
>> > Hi everyone,
>> >
>> > OpenStack use cross project liaisons to ensure that projects are
>> > talking to each effectively, and the docs CPLs are especially important
>> > to the documentation team to ensure we have accurate docs. Can all PTLs
>> > please take a moment to check (and update if necessary) their CPL
>> > listed here:
>> > https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
>> >
>> > Thanks a bunch!
>> >
>> > Lana
>> >
>> > --
>> > Lana Brindley
>> > Technical Writer
>> > Rackspace Cloud Builders Australia
>> > http://lanabrindley.com
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] duplicate scripts in different coes

2016-04-19 Thread Spyros Trigazis
I made a first attempt to factor out the configuration of different
storage drivers in this change [1].
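
The direction is roughly to keep a single shared fragment that every COE
template pulls in, instead of each template carrying its own copy. An
illustrative fragment (not the actual content of [1]) could look like:

# fragments/configure-docker-storage.sh (illustrative only)
cat > /etc/sysconfig/docker-storage-setup <<EOF
STORAGE_DRIVER=$DOCKER_STORAGE_DRIVER
EOF

with the kubernetes, swarm and mesos templates all including it via get_file.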

Spyros

[1] https://review.openstack.org/#/c/284720/

On 19 April 2016 at 05:45, Eli Qiao  wrote:

> Sure, those are things I have wanted to clean up for a long time.
> I listed them on
> https://etherpad.openstack.org/p/magnum-newton-design-summit-topics (item
> 12)
> and in bug https://bugs.launchpad.net/magnum/+bug/1517218
>
>
>
> On 2016年04月19日 10:40, 王华 wrote:
>
>> Hi all,
>>
>> There are some duplicate scripts in different coes now, for example
>> scripts for tls and etcd. I think we should put them into a common function
>> module. If there is some minor difference between the scripts in different
>> coes, we can pass different parameters to these scripts.
>>
>
> --
> Best Regards, Eli Qiao (乔立勇)
> Intel OTC China
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-04-01 Thread Spyros Trigazis
+1

I'm a new contributor, but Eli has already made a
good impression on me.

Cheers,
Spyros

On 1 April 2016 at 10:51, Cammann, Tom  wrote:

> +1
>
> From: Hongbin Lu 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, 31 March 2016 at 19:18
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core
> reviewer team
>
> Hi all,
>
> Eli Qiao has been consistently contributing to Magnum for a while. His
> contribution started about 10 months ago. Along the way, he
> implemented several important blueprints and fixed a lot of bugs. His
> contribution covers various aspects (i.e. APIs, conductor, unit/functional
> tests, all the COE templates, etc.), which shows that he has a good
> understanding of almost every piece of the system. The feature set he
> contributed is proven to be beneficial to the project. For example, the
> gate testing framework he heavily contributed to is what we rely on every
> day. His code reviews are also consistent and useful.
>
> I am happy to propose Eli Qiao to be a core reviewer of Magnum team.
> According to the OpenStack Governance process [1], we require a minimum of
> 4 +1 votes within a 1 week voting window (consider this proposal as a +1
> vote from me). A vote of -1 is a veto. If we cannot get enough votes or
> there is a veto vote prior to the end of the voting window, Eli is not able
> to join the core team and needs to wait 30 days to reapply.
>
> The voting is open until Thursday, April 7th.
>
> [1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>
> Best regards,
> Hongbin
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev