Re: [openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Magnum Queens uses Kubernetes 1.9.3 by default.
You can upgrade to v1.10.11-1. From a quick test,
v1.11.5-1 is also compatible with 1.9.x.

We are working to make this painless; sorry you
have to SSH to the nodes for now.

Cheers,
Spyros

On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis  wrote:

> Hello all,
>
> Following the vulnerability [0], with magnum rocky and the kubernetes
> driver
> on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To
> upgrade the apiserver in existing clusters, on the master node(s) you can run:
>
> sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
> sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver
>
> You can upgrade the other k8s components with similar commands.
>
> I'll share instructions for magnum queens tomorrow morning CET time.
>
> Cheers,
> Spyros
>
> [0] https://github.com/kubernetes/kubernetes/issues/71411
> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Hello all,

Following the vulnerability [0], with magnum rocky and the kubernetes driver
on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To
upgrade the apiserver in existing clusters, on the master node(s) you can run:

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver

You can upgrade the other k8s components with similar commands.
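
For example, the analogous commands for the remaining components would look
like this (the image and container names here follow the apiserver pattern
above and are assumptions, so please verify the exact names on your nodes
with "sudo atomic containers list" first):

# On master nodes
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1 kube-controller-manager
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-scheduler:v1.11.5-1 kube-scheduler

# On all nodes (kubelet and kube-proxy)
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-kubelet:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-kubelet:v1.11.5-1 kubelet
sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-proxy:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-proxy:v1.11.5-1 kube-proxy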

I'll share instructions for magnum queens tomorrow morning CET time.

Cheers,
Spyros

[0] https://github.com/kubernetes/kubernetes/issues/71411
[1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Vikrant Aggarwal
Hi Feilong,

Thanks for your reply.

Kindly find the below outputs.

[root@packstack1 ~]# rpm -qa | grep -i magnum
python-magnum-7.0.1-1.el7.noarch
openstack-magnum-conductor-7.0.1-1.el7.noarch
openstack-magnum-ui-5.0.1-1.el7.noarch
openstack-magnum-api-7.0.1-1.el7.noarch
puppet-magnum-13.3.1-1.el7.noarch
python2-magnumclient-2.10.0-1.el7.noarch
openstack-magnum-common-7.0.1-1.el7.noarch

[root@packstack1 ~]# rpm -qa | grep -i heat
openstack-heat-ui-1.4.0-1.el7.noarch
openstack-heat-api-cfn-11.0.0-1.el7.noarch
openstack-heat-engine-11.0.0-1.el7.noarch
puppet-heat-13.3.1-1.el7.noarch
python2-heatclient-1.16.1-1.el7.noarch
openstack-heat-api-11.0.0-1.el7.noarch
openstack-heat-common-11.0.0-1.el7.noarch

Thanks & Regards,
Vikrant Aggarwal


On Fri, Nov 30, 2018 at 2:44 AM Feilong Wang 
wrote:

> Hi Vikrant,
>
> Before we dig more, it would be nice if you could let us know the versions of
> your Magnum and Heat. Cheers.
>
>
> On 30/11/18 12:12 AM, Vikrant Aggarwal wrote:
>
> Hello Team,
>
> Trying to deploy K8s on Fedora Atomic.
>
> Here is the output of cluster template:
> ~~~
> [root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum
> cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57
> WARNING: The magnum client is deprecated and will be removed in a future
> release.
> Use the OpenStack client to avoid seeing this message.
> +-----------------------+--------------------------------------+
> | Property              | Value                                |
> +-----------------------+--------------------------------------+
> | insecure_registry     | -                                    |
> | labels                | {}                                   |
> | updated_at            | -                                    |
> | floating_ip_enabled   | True                                 |
> | fixed_subnet          | -                                    |
> | master_flavor_id      | -                                    |
> | user_id               | 203617849df9490084dde1897b28eb53     |
> | uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |
> | no_proxy              | -                                    |
> | https_proxy           | -                                    |
> | tls_disabled          | False                                |
> | keypair_id            | kubernetes                           |
> | project_id            | 45a6706c831c42d5bf2da928573382b1     |
> | public                | False                                |
> | http_proxy            | -                                    |
> | docker_volume_size    | 10                                   |
> | server_type           | vm                                   |
> | external_network_id   | external1                            |
> | cluster_distro        | fedora-atomic                        |
> | image_id              | f5954340-f042-4de3-819e-a3b359591770 |
> | volume_driver         | -                                    |
> | registry_enabled      | False                                |
> | docker_storage_driver | devicemapper                         |
> | apiserver_port        | -                                    |
> | name                  | coe-k8s-template                     |
> | created_at            | 2018-11-28T12:58:21+00:00            |
> | network_driver        | flannel                              |
> | fixed_network         | -                                    |
> | coe                   | kubernetes                           |
> | flavor_id             | m1.small                             |
> | master_lb_enabled     | False                                |
> | dns_nameserver        | 8.8.8.8                              |
> +-----------------------+--------------------------------------+
> ~~~
> Found a couple of issues in the logs of the VM started by Magnum.
>
> - etcd was not getting started because of incorrect permissions on the file
> "/etc/etcd/certs/server.key". This file is owned by root and has 0440
> permissions by default. Changed the permission to 0444 so that etcd can read
> the file. After that etcd started successfully.
>
> - etcd DB doesn't contain anything:
>
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r
> [root@kube-cluster1-qobaagdob75g-master-0 ~]#
>
> - Flanneld is stuck in activating status.
> ~~~
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld
> ● flanneld.service - Flanneld overlay address etcd agent
>Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled;
> vendor preset: disabled)
>Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago
>  Main PID: 6491 (flanneld)
> Tasks: 6 (limit: 4915)
>Memory: 4.7M
>   CPU: 53ms
>CGroup: /system.slice/flanneld.service
>└─6491 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379
> -etcd-prefix=/atomic.io/network
>
> Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:44.5693766491 

Re: [openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Feilong Wang
Hi Vikrant,

Before we dig more, it would be nice if you could let us know the versions
of your Magnum and Heat. Cheers.


On 30/11/18 12:12 AM, Vikrant Aggarwal wrote:
> Hello Team,
>
> Trying to deploy K8s on Fedora Atomic.
>
> Here is the output of cluster template:
> ~~~
> [root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum
> cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57
> WARNING: The magnum client is deprecated and will be removed in a
> future release.
> Use the OpenStack client to avoid seeing this message.
> +-----------------------+--------------------------------------+
> | Property              | Value                                |
> +-----------------------+--------------------------------------+
> | insecure_registry     | -                                    |
> | labels                | {}                                   |
> | updated_at            | -                                    |
> | floating_ip_enabled   | True                                 |
> | fixed_subnet          | -                                    |
> | master_flavor_id      | -                                    |
> | user_id               | 203617849df9490084dde1897b28eb53     |
> | uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |
> | no_proxy              | -                                    |
> | https_proxy           | -                                    |
> | tls_disabled          | False                                |
> | keypair_id            | kubernetes                           |
> | project_id            | 45a6706c831c42d5bf2da928573382b1     |
> | public                | False                                |
> | http_proxy            | -                                    |
> | docker_volume_size    | 10                                   |
> | server_type           | vm                                   |
> | external_network_id   | external1                            |
> | cluster_distro        | fedora-atomic                        |
> | image_id              | f5954340-f042-4de3-819e-a3b359591770 |
> | volume_driver         | -                                    |
> | registry_enabled      | False                                |
> | docker_storage_driver | devicemapper                         |
> | apiserver_port        | -                                    |
> | name                  | coe-k8s-template                     |
> | created_at            | 2018-11-28T12:58:21+00:00            |
> | network_driver        | flannel                              |
> | fixed_network         | -                                    |
> | coe                   | kubernetes                           |
> | flavor_id             | m1.small                             |
> | master_lb_enabled     | False                                |
> | dns_nameserver        | 8.8.8.8                              |
> +-----------------------+--------------------------------------+
> ~~~
> Found a couple of issues in the logs of the VM started by Magnum.
>
> - etcd was not getting started because of incorrect permissions on the file
> "/etc/etcd/certs/server.key". This file is owned by root and has 0440
> permissions by default. Changed the permission to 0444 so that etcd
> can read the file. After that etcd started successfully.
>
> - etcd DB doesn't contain anything:
>
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r
> [root@kube-cluster1-qobaagdob75g-master-0 ~]#
>
> - Flanneld is stuck in activating status.
> ~~~
> [root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld
> ● flanneld.service - Flanneld overlay address etcd agent
>    Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled;
> vendor preset: disabled)
>    Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago
>  Main PID: 6491 (flanneld)
>     Tasks: 6 (limit: 4915)
>    Memory: 4.7M
>   CPU: 53ms
>    CGroup: /system.slice/flanneld.service
>    └─6491 /usr/bin/flanneld
> -etcd-endpoints=http://127.0.0.1:2379 -etcd-prefix=/atomic.io/network
> 
>
> Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:44.569376    6491 network.go:102] failed
> to retrieve network config: 100: Key not found (/atomic.io
> ) [3]
> Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:45.584532    6491 network.go:102] failed
> to retrieve network config: 100: Key not found (/atomic.io
> ) [3]
> Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:46.646255    6491 network.go:102] failed
> to retrieve network config: 100: Key not found (/atomic.io
> ) [3]
> Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal
> flanneld[6491]: E1129 11:05:47.673062    6491 network.go:102] failed
> to retrieve network config: 100: Key not found (/atomic.io
> ) [3]
> Nov 29 11:05:48 

[openstack-dev] [magnum] [Rocky] K8 deployment on fedora-atomic is failed

2018-11-29 Thread Vikrant Aggarwal
Hello Team,

Trying to deploy K8s on Fedora Atomic.

Here is the output of cluster template:
~~~
[root@packstack1 k8s_fedora_atomic_v1(keystone_admin)]# magnum
cluster-template-show 16eb91f7-18fe-4ce3-98db-c732603f2e57
WARNING: The magnum client is deprecated and will be removed in a future
release.
Use the OpenStack client to avoid seeing this message.
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| insecure_registry     | -                                    |
| labels                | {}                                   |
| updated_at            | -                                    |
| floating_ip_enabled   | True                                 |
| fixed_subnet          | -                                    |
| master_flavor_id      | -                                    |
| user_id               | 203617849df9490084dde1897b28eb53     |
| uuid                  | 16eb91f7-18fe-4ce3-98db-c732603f2e57 |
| no_proxy              | -                                    |
| https_proxy           | -                                    |
| tls_disabled          | False                                |
| keypair_id            | kubernetes                           |
| project_id            | 45a6706c831c42d5bf2da928573382b1     |
| public                | False                                |
| http_proxy            | -                                    |
| docker_volume_size    | 10                                   |
| server_type           | vm                                   |
| external_network_id   | external1                            |
| cluster_distro        | fedora-atomic                        |
| image_id              | f5954340-f042-4de3-819e-a3b359591770 |
| volume_driver         | -                                    |
| registry_enabled      | False                                |
| docker_storage_driver | devicemapper                         |
| apiserver_port        | -                                    |
| name                  | coe-k8s-template                     |
| created_at            | 2018-11-28T12:58:21+00:00            |
| network_driver        | flannel                              |
| fixed_network         | -                                    |
| coe                   | kubernetes                           |
| flavor_id             | m1.small                             |
| master_lb_enabled     | False                                |
| dns_nameserver        | 8.8.8.8                              |
+-----------------------+--------------------------------------+
~~~
Found a couple of issues in the logs of the VM started by Magnum.

- etcd was not getting started because of incorrect permissions on the file
"/etc/etcd/certs/server.key". This file is owned by root and has 0440
permissions by default. Changed the permission to 0444 so that etcd can read
the file. After that etcd started successfully.
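
In other words, the workaround was roughly the following (a less permissive
alternative, assuming etcd runs under an etcd user/group on this image, would
be to give that group read access instead of widening the mode):

sudo chmod 0444 /etc/etcd/certs/server.key        # what was done here
# or keep 0440 and let the etcd group read it:
# sudo chgrp etcd /etc/etcd/certs/server.key
sudo systemctl restart etcd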

- etcd DB doesn't contain anything:

[root@kube-cluster1-qobaagdob75g-master-0 ~]# etcdctl ls / -r
[root@kube-cluster1-qobaagdob75g-master-0 ~]#

- Flanneld is stuck in activating status.
~~~
[root@kube-cluster1-qobaagdob75g-master-0 ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled;
vendor preset: disabled)
   Active: activating (start) since Thu 2018-11-29 11:05:39 UTC; 14s ago
 Main PID: 6491 (flanneld)
Tasks: 6 (limit: 4915)
   Memory: 4.7M
  CPU: 53ms
   CGroup: /system.slice/flanneld.service
   └─6491 /usr/bin/flanneld -etcd-endpoints=http://127.0.0.1:2379
-etcd-prefix=/atomic.io/network

Nov 29 11:05:44 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:44.569376 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:45 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:45.584532 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:46 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:46.646255 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:47 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:47.673062 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:48 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:48.686919 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:49 kube-cluster1-qobaagdob75g-master-0.novalocal
flanneld[6491]: E1129 11:05:49.709136 6491 network.go:102] failed to
retrieve network config: 100: Key not found (/atomic.io) [3]
Nov 29 11:05:50 kube-cluster1-qobaagdob75g-master-0.novalocal
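
From what I understand, these errors mean flanneld cannot find its network
configuration under the etcd prefix it was started with (/atomic.io/network),
which matches the empty etcd shown above. Normally the cluster bootstrap
scripts write that key; as a manual check one could write it by hand, for
example (the CIDR and backend here are only illustrative, adjust them to the
cluster's flannel settings):

etcdctl set /atomic.io/network/config '{"Network": "10.100.0.0/16", "Backend": {"Type": "vxlan"}}'
systemctl restart flanneld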

[openstack-dev] [magnum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Magnum team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Magnum would fall under the 'Plays Well With Others' design goal, as 
it's one way of integrating OpenStack with Kubernetes, ensuring that 
OpenStack users have access to container orchestration tools. And it's 
also an example (along with Sahara and Trove) of the 'Abstract 
Specialised Operations' goal, since it allows operators to have a 
centralised team of Kubernetes cluster operators to serve multiple tenants.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Upcoming meeting 2018-09-11 Tuesday UTC 2100

2018-09-11 Thread Spyros Trigazis
Hello team,

This is a reminder for the upcoming magnum meeting [0].

For convenience you can import this from here [1] or view it in html here
[2].

Cheers,
Spyros

[0]
https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
[1]
https://calendar.google.com/calendar/ical/dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com/public/basic.ics
[2]
https://calendar.google.com/calendar/embed?src=dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com=Europe/Zurich
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin

Now with Fedora 26 I have etcd available but etcd fails.

[root@swarm-u2rnie4d4ik6-master-0 ~]# /usr/bin/etcd 
--name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" 
--listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" --debug
2018-08-23 14:34:15.596516 E | etcdmain: error verifying flags, 
--advertise-client-urls is required when --listen-client-urls is set 
explicitly. See 'etcd --help'.
2018-08-23 14:34:15.596611 E | etcdmain: When listening on specific 
address(es), this etcd process must advertise accessible url(s) to each 
connected client.


There is an issue where --advertise-client-urls and the TLS --cert-file
and --key-file are not passed in the systemd unit file; changing this to:
/usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" 
--listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" 
--advertise-client-urls="${ETCD_ADVERTISE_CLIENT_URLS}" 
--cert-file="${ETCD_PEER_CERT_FILE}" --key-file="${ETCD_PEER_KEY_FILE}"


Makes it work, any thoughts?
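
One way to persist that change without editing the packaged unit file is a
systemd drop-in; a minimal sketch, assuming the same environment variables as
in the unit above:

sudo systemctl edit etcd
# add in the editor that opens:
#   [Service]
#   ExecStart=
#   ExecStart=/usr/bin/etcd --name="${ETCD_NAME}" --data-dir="${ETCD_DATA_DIR}" \
#     --listen-client-urls="${ETCD_LISTEN_CLIENT_URLS}" \
#     --advertise-client-urls="${ETCD_ADVERTISE_CLIENT_URLS}" \
#     --cert-file="${ETCD_PEER_CERT_FILE}" --key-file="${ETCD_PEER_KEY_FILE}"
sudo systemctl daemon-reload
sudo systemctl restart etcd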

Best regards
Tobias

On 08/23/2018 03:54 PM, Tobias Urdin wrote:
Found the issue, I assume I have to use Fedora Atomic 26 until Rocky 
where I can start using Fedora Atomic 27.

Will Fedora Atomic 28 be supported for Rocky?

https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld 
in system containers, In Fedora Atomic 27 etcd and flanneld are 
removed from the base image.)
https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a 
system container)


Still wondering about the "The Parameter (nodes_affinity_policy) was 
not provided" when using Mesos + Ubuntu?


Best regards
Tobias

On 08/23/2018 02:56 PM, Tobias Urdin wrote:

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off
regarding Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried
with both Docker and flannel as the networking layer, but
neither of them works. After investigating, the issue seems to be that
etcd.service is not installed (the unit file doesn't exist), so the master
doesn't work; the minion swarm node is provisioned but cannot join
the cluster because there is no etcd.


Anybody seen this issue before? I've been digging through all 
cloud-init logs and cannot see anything that would cause this.


I also have another separate issue: when provisioning using the
magnum-ui in Horizon and selecting Ubuntu with Mesos I get the error
"The Parameter (nodes_affinity_policy) was not provided". The
nodes_affinity_policy does have a default value in magnum.conf, so I'm
starting to think this might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd 
share them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on 
the individual instances to see if any of the "parts" have failed to 
run. Manually re-run the parts on the command-line to get a better 
idea of why they failed. Review the actual script, figure out the 
variable interpolation and how it relates to the Cluster Template 
being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until 
you reach a point where the stock drivers won't meet your needs. But 
I mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin > wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the
instances, because this seems to be the root of all issues, I'm
not using Barbican 

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin
Found the issue, I assume I have to use Fedora Atomic 26 until Rocky 
where I can start using Fedora Atomic 27.

Will Fedora Atomic 28 be supported for Rocky?

https://bugs.launchpad.net/magnum/+bug/1735381 (Run etcd and flanneld in 
system containers, In Fedora Atomic 27 etcd and flanneld are removed 
from the base image.)
https://review.openstack.org/#/c/524116/ (Run etcd and flanneld in a 
system container)


Still wondering about the "The Parameter (nodes_affinity_policy) was not 
provided" when using Mesos + Ubuntu?


Best regards
Tobias

On 08/23/2018 02:56 PM, Tobias Urdin wrote:

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off
regarding Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried
with both Docker and flannel as the networking layer, but
neither of them works. After investigating, the issue seems to be that
etcd.service is not installed (the unit file doesn't exist), so the master
doesn't work; the minion swarm node is provisioned but cannot join the
cluster because there is no etcd.


Anybody seen this issue before? I've been digging through all 
cloud-init logs and cannot see anything that would cause this.


I also have another separate issue: when provisioning using the
magnum-ui in Horizon and selecting Ubuntu with Mesos I get the error
"The Parameter (nodes_affinity_policy) was not provided". The
nodes_affinity_policy does have a default value in magnum.conf, so I'm
starting to think this might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd 
share them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on 
the individual instances to see if any of the "parts" have failed to 
run. Manually re-run the parts on the command-line to get a better 
idea of why they failed. Review the actual script, figure out the 
variable interpolation and how it relates to the Cluster Template 
being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until 
you reach a point where the stock drivers won't meet your needs. But 
I mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin > wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the
instances, because this seems to be the root of all issues, I'm
not using Barbican but the x509keypair driver
is that the reason?

Perhaps I missed some documentation that x509keypair does not
support what I'm trying to do?

I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate
issues
dockerd-current[1909]: Could not load X509 key pair (cert:
"/etc/docker/server.crt", key: "/etc/docker/server.key")

* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result
'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open
/etc/etcd/certs/server.crt: no such file or directory

* Master kube-apiserver does not start because of certificate
unable to load server certificate: open
/etc/kubernetes/certs/server.crt: no such file or directory

* Master 

Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-23 Thread Tobias Urdin

Thanks for all of your help everyone,

I've been busy with other things but was able to pick up where I left off
regarding Magnum.
After fixing some issues I have been able to provision a working 
Kubernetes cluster.


I'm still having issues getting Docker Swarm working. I've tried
with both Docker and flannel as the networking layer, but
neither of them works. After investigating, the issue seems to be that
etcd.service is not installed (the unit file doesn't exist), so the master
doesn't work; the minion swarm node is provisioned but cannot join the
cluster because there is no etcd.


Anybody seen this issue before? I've been digging through all cloud-init 
logs and cannot see anything that would cause this.


I also have another separate issue: when provisioning using the
magnum-ui in Horizon and selecting Ubuntu with Mesos I get the error
"The Parameter (nodes_affinity_policy) was not provided". The
nodes_affinity_policy does have a default value in magnum.conf, so I'm
starting to think this might be an issue with the magnum-ui dashboard?

Best regards
Tobias

On 08/04/2018 06:24 PM, Joe Topjian wrote:
We recently deployed Magnum and I've been making my way through 
getting both Swarm and Kubernetes running. I also ran into some 
initial issues. These notes may or may not help, but thought I'd share 
them in case:


* We're using Barbican for SSL. I have not tried with the internal 
x509keypair.


* I was only able to get things running with Fedora Atomic 27, 
specifically the version used in the Magnum docs: 
https://docs.openstack.org/magnum/latest/install/launch-instance.html


Anything beyond that wouldn't even boot in my cloud. I haven't dug 
into this.


* Kubernetes requires a Cluster Template to have a label of 
cert_manager_api=true set in order for the cluster to fully come up 
(at least, it didn't work for me until I set this).


As far as troubleshooting methods go, check the cloud-init logs on the 
individual instances to see if any of the "parts" have failed to run. 
Manually re-run the parts on the command-line to get a better idea of 
why they failed. Review the actual script, figure out the variable 
interpolation and how it relates to the Cluster Template being used.


Eventually I was able to get clusters running with the stock 
driver/templates, but wanted to tune them in order to better fit in 
our cloud, so I've "forked" them. This is in no way a slight against 
the existing drivers/templates nor do I recommend doing this until you 
reach a point where the stock drivers won't meet your needs. But I 
mention it because it's possible to do and it's not terribly hard. 
This is still a work-in-progress and a bit hacky:


https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin > wrote:


Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora
Atomic 28) and Kubernetes (on Fedora Atomic 27) and haven't been
able to get it working.

Running Queens, is there any information about supported images?
Is Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the
instances, because this seems to be the root of all issues, I'm
not using Barbican but the x509keypair driver
is that the reason?

Perhaps I missed some documentation that x509keypair does not
support what I'm trying to do?

I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate
issues
dockerd-current[1909]: Could not load X509 key pair (cert:
"/etc/docker/server.crt", key: "/etc/docker/server.key")

* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result
'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open
/etc/etcd/certs/server.crt: no such file or directory

* Master kube-apiserver does not start because of certificate
unable to load server certificate: open
/etc/kubernetes/certs/server.crt: no such file or directory

* Master heat script just sleeps forever waiting for port 8080 to
become available (kube-apiserver) so it can never kubectl apply
the final steps.

* Node does not even start and times out when Heat deploys it,
probably because master never finishes

Any help is appreciated perhaps I've missed something crucial,
I've not tested Kubernetes on CoreOS yet.

Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:

[openstack-dev] [magnum] [magnum-ui] show certificate button bug requesting reviews

2018-08-23 Thread Tobias Urdin

Hello,

Requesting reviews from the magnum-ui core team for 
https://review.openstack.org/#/c/595245/
I'm hoping that we can make quick work of this and be able to backport
it to the stable/rocky release; it would be ideal to backport it to
stable/queens as well.


Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8s Conformance Testing

2018-08-21 Thread Mohammed Naser
Hi Chris,

This is an awesome effort. We can provide nested virt resources which are 
leveraged by Kata at the moment. 

Thanks!
Mohammed

Sent from my iPhone

> On Aug 21, 2018, at 6:38 PM, Chris Hoge  wrote:
> 
> As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions,
> we're moving forward with obtaining Kubernetes Conformance certification for
> Magnum. While conformance test jobs aren't reliably running in the gate yet,
> the requirements of the program make sumbitting results manually on an
> infrequent basis something that we can work with while we wait for more
> stable nested virtualization resources. The OpenStack Foundation has signed
> the license agreement, and Feilong Wang is preparing an initial conformance
> run to submit for certification.
> 
> My thanks to the Magnum team for their impressive work on building out an
> API for deploying Kubernetes on OpenStack clusters.
> 
> [1] https://www.cncf.io/certification/software-conformance/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] K8s Conformance Testing

2018-08-21 Thread Chris Hoge
As discussed at the Vancouver SIG-K8s and Copenhagen SIG-OpenStack sessions,
we're moving forward with obtaining Kubernetes Conformance certification for
Magnum. While conformance test jobs aren't reliably running in the gate yet,
the requirements of the program make submitting results manually on an
infrequent basis something that we can work with while we wait for more
stable nested virtualization resources. The OpenStack Foundation has signed
the license agreement, and Feilong Wang is preparing an initial conformance
run to submit for certification.

My thanks to the Magnum team for their impressive work on building out an
API for deploying Kubernetes on OpenStack clusters.

[1] https://www.cncf.io/certification/software-conformance/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-04 Thread Joe Topjian
We recently deployed Magnum and I've been making my way through getting
both Swarm and Kubernetes running. I also ran into some initial issues.
These notes may or may not help, but thought I'd share them in case:

* We're using Barbican for SSL. I have not tried with the internal
x509keypair.

* I was only able to get things running with Fedora Atomic 27, specifically
the version used in the Magnum docs:
https://docs.openstack.org/magnum/latest/install/launch-instance.html

Anything beyond that wouldn't even boot in my cloud. I haven't dug into
this.

* Kubernetes requires a Cluster Template to have a label of
cert_manager_api=true set in order for the cluster to fully come up (at
least, it didn't work for me until I set this).
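
For example, such a label can be set when the template is created; roughly
(the image, network, flavor and keypair names below are just placeholders for
whatever your cloud uses):

openstack coe cluster template create k8s-atomic \
  --coe kubernetes \
  --image fedora-atomic-27 \
  --external-network public \
  --flavor m1.small \
  --keypair mykey \
  --network-driver flannel \
  --docker-volume-size 10 \
  --labels cert_manager_api=true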

As far as troubleshooting methods go, check the cloud-init logs on the
individual instances to see if any of the "parts" have failed to run.
Manually re-run the parts on the command-line to get a better idea of why
they failed. Review the actual script, figure out the variable
interpolation and how it relates to the Cluster Template being used.
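
For reference, the usual cloud-init locations for this (paths can differ
slightly per image, so treat them as a starting point):

# combined stdout/stderr of all parts that ran on boot
sudo less /var/log/cloud-init-output.log
sudo less /var/log/cloud-init.log
# the rendered user-data "parts" for this instance, which you can re-run by hand
ls /var/lib/cloud/instance/scripts/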

Eventually I was able to get clusters running with the stock
driver/templates, but wanted to tune them in order to better fit in our
cloud, so I've "forked" them. This is in no way a slight against the
existing drivers/templates nor do I recommend doing this until you reach a
point where the stock drivers won't meet your needs. But I mention it
because it's possible to do and it's not terribly hard. This is still a
work-in-progress and a bit hacky:

https://github.com/cybera/magnum-templates

Hope that helps,
Joe

On Fri, Aug 3, 2018 at 6:46 AM, Tobias Urdin  wrote:

> Hello,
>
> I'm testing around with Magnum and have so far only had issues.
> I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 28)
> and Kubernetes (on Fedora Atomic 27) and haven't been able to get it
> working.
>
> Running Queens, is there any information about supported images? Is Magnum
> maintained to support Fedora Atomic still?
> What is in charge of populating the certificates inside the instances,
> because this seems to be the root of all issues, I'm not using Barbican but
> the x509keypair driver
> is that the reason?
>
> Perhaps I missed some documentation that x509keypair does not support what
> I'm trying to do?
>
> I've seen the following issues:
>
> Docker:
> * Master does not start and listen on TCP because of certificate issues
> dockerd-current[1909]: Could not load X509 key pair (cert:
> "/etc/docker/server.crt", key: "/etc/docker/server.key")
>
> * Node does not start with:
> Dependency failed for Docker Application Container Engine.
> docker.service: Job docker.service/start failed with result 'dependency'.
>
> Kubernetes:
> * Master etcd does not start because /run/etcd does not exist
> ** When that is created it fails to start because of certificate
> 2018-08-03 12:41:16.554257 C | etcdmain: open /etc/etcd/certs/server.crt:
> no such file or directory
>
> * Master kube-apiserver does not start because of certificate
> unable to load server certificate: open /etc/kubernetes/certs/server.crt:
> no such file or directory
>
> * Master heat script just sleeps forever waiting for port 8080 to become
> available (kube-apiserver) so it can never kubectl apply the final steps.
>
> * Node does not even start and times out when Heat deploys it, probably
> because master never finishes
>
> Any help is appreciated perhaps I've missed something crucial, I've not
> tested Kubernetes on CoreOS yet.
>
> Best regards
> Tobias
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Bogdan Katynski

> On 3 Aug 2018, at 13:46, Tobias Urdin  wrote:
> 
> Kubernetes:
> * Master etcd does not start because /run/etcd does not exist

This could be an issue with the etcd rpm. With systemd, /run is an in-memory
tmpfs and is wiped on reboots.

We’ve come across a similar issue in mariadb rpm on CentOS 7: 
https://bugzilla.redhat.com/show_bug.cgi?id=1538066

If the etcd rpm only creates /run/etcd during installation, that directory will 
not survive reboots. The rpm should also drop a file in 
/usr/lib/tmpfiles.d/etcd.conf with contents similar to

d /run/etcd 0755 etcd etcd - -
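
Once such a file is in place it can be applied immediately, without waiting
for a reboot (or, as a one-off, the directory can simply be recreated by hand):

sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/etcd.conf
# one-off equivalent:
sudo install -d -o etcd -g etcd -m 0755 /run/etcd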


--
Bogdan Katyński
freenode: bodgix







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Tobias Urdin

Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 
28) and Kubernetes (on Fedora Atomic 27) and haven't been able to get it 
working.


Running Queens, is there any information about supported images? Is 
Magnum maintained to support Fedora Atomic still?
What is in charge of populating the certificates inside the instances, 
because this seems to be the root of all issues, I'm not using Barbican 
but the x509keypair driver

is that the reason?

Perhaps I missed some documentation that x509keypair does not support 
what I'm trying to do?


I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate issues
dockerd-current[1909]: Could not load X509 key pair (cert: 
"/etc/docker/server.crt", key: "/etc/docker/server.key")


* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result 'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open 
/etc/etcd/certs/server.crt: no such file or directory


* Master kube-apiserver does not start because of certificate
unable to load server certificate: open 
/etc/kubernetes/certs/server.crt: no such file or directory


* Master heat script just sleeps forever waiting for port 8080 to become 
available (kube-apiserver) so it can never kubectl apply the final steps.


* Node does not even start and times out when Heat deploys it, probably 
because master never finishes


Any help is appreciated perhaps I've missed something crucial, I've not 
tested Kubernetes on CoreOS yet.


Best regards
Tobias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread T. Nichole Williams
+1, you’ve got my vote :D

T. Nichole Williams
tribe...@tribecc.us



> On Jul 27, 2018, at 6:35 AM, Spyros Trigazis  wrote:
> 
> Hello OpenStack community!
> 
> I would like to nominate myself as PTL for the Magnum project for the
> Stein cycle.
> 
> In the last cycle magnum became more stable and is reaching the point
> of becoming a feature complete solution for providing managed container
> clusters for private or public OpenStack clouds. Also during this cycle
> the community around the project became healthy and more sustainable.
> 
> My goals for Stein are to:
> - complete the work in cluster upgrades and cluster healing
> - keep up with the latest release of Kubernetes and Docker in stable
>   branches and improve their release process
> - documentation improvements for cloud operators
> - continue on building the community which supports the project
> 
> Thanks for your time,
> Spyros
> 
> strigazi on Freenode
> 
> [0] https://review.openstack.org/#/c/586516/ 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] PTL Candidacy for Stein

2018-07-27 Thread Spyros Trigazis
Hello OpenStack community!

I would like to nominate myself as PTL for the Magnum project for the
Stein cycle.

In the last cycle magnum became more stable and is reaching the point
of becoming a feature complete solution for providing managed container
clusters for private or public OpenStack clouds. Also during this cycle
the community around the project became healthy and more sustainable.

My goals for Stein are to:
- complete the work in cluster upgrades and cluster healing
- keep up with the latest release of Kubernetes and Docker in stable
  branches and improve their release process
- documentation improvements for cloud operators
- continue on building the community which supports the project

Thanks for your time,
Spyros

strigazi on Freenode

[0] https://review.openstack.org/#/c/586516/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-07-24 Thread Spyros Trigazis
Hello list,

After trial and error this is the new layout of the magnum meetings plus
office hours.

1. The meeting moves to Tuesdays 2100 UTC starting today
2.1 Office hours for strigazi Tuesdays: 1300 to 1400 UTC
2.2 Office hours for flwang Wednesdays : 2200 to 2300 UTC

Cheers,
Spyros

[0] https://wiki.openstack.org/wiki/Meetings/Containers


On Tue, 26 Jun 2018 at 04:46, Fei Long Wang  wrote:

> Hi Spyros,
>
> Thanks for posting the discussion output. I'm not sure I can follow the
> idea of simplifying CNI configuration. Though we have both calico and
> flannel for k8s, if we put both of them into a single config script,
> the script could be very complex. That's why I think we should define some
> naming and logging rules/policies for those scripts for long term
> maintenance to make our life easier. Thoughts?
>
> On 25/06/18 19:20, Spyros Trigazis wrote:
>
> Hello again,
>
> After Thursday's meeting I want to summarize what we discussed and add
> some pointers.
>
>
>- Work on using the out-of-tree cloud provider and move to the new
>model of defining it
>https://storyboard.openstack.org/#!/story/1762743
>https://review.openstack.org/#/c/577477/
>- Configure kubelet and kube-proxy on master nodes
>This story of the master node label can be extended
>https://storyboard.openstack.org/#!/story/2002618
>or we can add a new one
>- Simplify CNI configuration, we have calico and flannel. Ideally we
>should have a single config script for each
>one. We could move flannel to the kubernetes hosted version that uses
>kubernetes objects for storage.
>(it is the recommended way by flannel and how it is done with kubeadm)
>- magnum support in gophercloud
>https://github.com/gophercloud/gophercloud/issues/1003
>- *needs discussion* update version of heat templates (pike or queens).
>This needs its own thread
>- Post deployment scripts for clusters, I have had this for some time
>for myself, but doing it in
>heat is slightly (not a lot) complicated. Most magnum users favor  the
>simpler solution
>of passing a url of a manifest or script to the cluster (at least
>let's add sha512sum).
>- Simplify addition of custom labels/parameters. To avoid patching
>magnum, it would be
>more ops friendly to have a generic field of custom parameters
>
> Not discussed in the last meeting but we should in the next ones:
>
>- Allow cluster scaling from different users in the same project
>https://storyboard.openstack.org/#!/story/2002648
>- Add the option to remove node from a resource group for swarm
>clusters like
>in kubernetes
>https://storyboard.openstack.org/#!/story/2002677
>
> Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday
> 1700UTC.
>
> You can always consult this page [1] for future meetings.
>
> Cheers,
> Spyros
>
> [1] https://wiki.openstack.org/wiki/Meetings/Containers
>
> On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  wrote:
>
>> Hello list,
>>
>> We are going to have a second weekly meeting for magnum for 3 weeks
>> as a test to reach out to contributors in the Americas.
>>
>> You can join us tomorrow (or today for some?) at 1700UTC in
>> #openstack-containers .
>>
>> Cheers,
>> Spyros
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Lingxian Kong
Huge +1


Cheers,
Lingxian Kong

On Tue, Jul 17, 2018 at 7:04 PM, Yatin Karel  wrote:

> +2 Well deserved.
>
> Welcome Feilong and Thanks for all the Great Work!!!
>
>
> Regards
> Yatin Karel
>
> On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis 
> wrote:
> > Hello list,
> >
> > I'm excited to nominate Feilong as Core Reviewer for the Magnum project.
> >
> > Feilong has contributed many features like Calico as an alternative CNI
> for
> > kubernetes, make coredns scale proportionally to the cluster, improved
> > admin operations on clusters and improved multi-master deployments. Apart
> > from contributing to the project he has been contributing to other
> projects
> > like gophercloud and shade, he has been very helpful with code reviews
> > and he tests and reviews all patches that are coming in. Finally, he is
> very
> > responsive on IRC and in the ML.
> >
> > Thanks for all your contributions Feilong, I'm looking forward to working
> > with
> > you more!
> >
> > Cheers,
> > Spyros
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Yatin Karel
+2 Well deserved.

Welcome Feilong and Thanks for all the Great Work!!!


Regards
Yatin Karel

On Tue, Jul 17, 2018 at 12:27 PM, Spyros Trigazis  wrote:
> Hello list,
>
> I'm excited to nominate Feilong as Core Reviewer for the Magnum project.
>
> Feilong has contributed many features like Calico as an alternative CNI for
> kubernetes, make coredns scale proportionally to the cluster, improved
> admin operations on clusters and improved multi-master deployments. Apart
> from contributing to the project he has been contributing to other projects
> like gophercloud and shade, he has been very helpful with code reviews
> and he tests and reviews all patches that are coming in. Finally, he is very
> responsive on IRC and in the ML.
>
> Thanks for all your contributions Feilong, I'm looking forward to working
> with
> you more!
>
> Cheers,
> Spyros
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Nominate Feilong Wang for Core Reviewer

2018-07-17 Thread Spyros Trigazis
Hello list,

I'm excited to nominate Feilong as Core Reviewer for the Magnum project.

Feilong has contributed many features like Calico as an alternative CNI for
kubernetes, making coredns scale proportionally to the cluster, improved
admin operations on clusters and improved multi-master deployments. Apart
from contributing to the project he has been contributing to other projects
like gophercloud and shade, he has been very helpful with code reviews
and he tests and reviews all patches that are coming in. Finally, he is very
responsive on IRC and in the ML.

Thanks for all your contributions Feilong, I'm looking forward to working
with
you more!

Cheers,
Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Fei Long Wang
Hi Andrei,

Thanks for raising this issue. I'm keen to review and happy to help. I
just did a quick look at https://review.openstack.org/#/c/578356 and it
looks good to me.

As for the heat-container-engine issue, it's probably a bug. I will test and
propose a patch, which will then need a new image release. Will update
progress here. Cheers.



On 28/06/18 19:11, Andrei Ozerov wrote:
> Greetings.
>
> Has anyone successfully deployed Magnum in a multi-regional
> OpenStack installation?
> In my case different services (Nova, Heat) have different public
> endpoints in every region. I couldn't start Kube-apiserver until I
> added "region" to kube_openstack_config.
> I created a story with full description of that problem:
> https://storyboard.openstack.org/#!/story/2002728
>  and opened a
> review with a small fix: https://review.openstack.org/#/c/578356.
>
> But apart from that I have another problem with this kind of OpenStack
> installation.
> Say I have two regions. When I create a cluster in the second
> OpenStack region, Heat-container-engine tries to fetch Stack data from
> the first region.
> It then throws the following error: "The Stack (hame-uuid) could not
> be found". I can see GET requests for that stack in logs of Heat-API
> in the first region but I don't see them in the second one (where that
> Heat stack actually exists).
>
> I'm assuming that Heat-container-engine doesn't pass "region_name"
> when it searches for Heat endpoints:
> https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149.
> I've tried to change it but it's tricky because the
> Heat-container-engine is installed via Docker system-image and it
> won't work after restart if it's failed in the initial bootstrap
> (because /var/run/heat-config/heat-config is empty).
> Can someone help me with that? I guess it's better to create a
> separate story for that issue?
>
> -- 
> Ozerov Andrei
> oze...@selectel.com 
> +7 (800) 555 06 75
> 
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Problems with multi-regional OpenStack installation

2018-06-28 Thread Andrei Ozerov
Greetings.

Has anyone successfully deployed Magnum in a multi-regional OpenStack
installation?
In my case different services (Nova, Heat) have different public endpoints
in every region. I couldn't start Kube-apiserver until I added "region" to
kube_openstack_config.
I created a story with full description of that problem:
https://storyboard.openstack.org/#!/story/2002728 and opened a review with
a small fix: https://review.openstack.org/#/c/578356.
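
For context, the change amounts to making sure the region ends up in the
cloud-provider config that kube-apiserver reads; roughly like this (the key
names follow the upstream OpenStack cloud provider format, the path is the one
commonly used on the master, and the values are placeholders):

# /etc/kubernetes/kube_openstack_config (excerpt)
[Global]
auth-url=https://keystone.example.com:5000/v3
region=RegionTwo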

But apart from that I have another problem with this kind of OpenStack
installation.
Say I have two regions. When I create a cluster in the second OpenStack
region, Heat-container-engine tries to fetch Stack data from the first
region.
It then throws the following error: "The Stack (hame-uuid) could not be
found". I can see GET requests for that stack in logs of Heat-API in the
first region but I don't see them in the second one (where that Heat stack
actually exists).

I'm assuming that Heat-container-engine doesn't pass "region_name" when it
searches for Heat endpoints:
https://github.com/openstack/magnum/blob/master/magnum/drivers/common/image/heat-container-agent/scripts/heat-config-notify#L149
.
I've tried to change it but it's tricky because the Heat-container-engine
is installed via Docker system-image and it won't work after restart if
it's failed in the initial bootstrap (because
/var/run/heat-config/heat-config is empty).
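To make the idea concrete, here is a rough sketch (not the actual
heat-config-notify code; the variable names and the way region_name would be
obtained are assumptions on my side) of what a region-aware endpoint lookup
could look like:

    # hypothetical sketch only, placeholder variables
    from keystoneclient.v3 import client as ks_client

    ks = ks_client.Client(auth_url=auth_url, user_id=user_id,
                          password=password, project_id=project_id)
    heat_endpoint = ks.service_catalog.url_for(
        service_type='orchestration',
        endpoint_type='publicURL',
        region_name=region_name)  # <- the part that seems to be missing today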
Can someone help me with that? I guess it's better to create a separate
story for that issue?

-- 
Ozerov Andrei
oze...@selectel.com
+7 (800) 555 06 75

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-25 Thread Fei Long Wang
Hi Spyros,

Thanks for posting the discussion output. I'm not sure I can follow the
idea of simplifying the CNI configuration. Though we have both calico and
flannel for k8s, if we put both of them into a single config script, the
script could become very complex. That's why I think we should define some
naming and logging rules/policies for those scripts, to make long-term
maintenance and our lives easier. Thoughts?


On 25/06/18 19:20, Spyros Trigazis wrote:
> Hello again,
>
> After Thursday's meeting I want to summarize what we discussed and add
> some pointers.
>
>   * Work on using the out-of-tree cloud provider and move to the new
> model of defining it
> https://storyboard.openstack.org/#!/story/1762743
> 
> https://review.openstack.org/#/c/577477/
>   * Configure kubelet and kube-proxy on master nodes
> This story of the master node label can be
> extened https://storyboard.openstack.org/#!/story/2002618
> 
> or we can add a new one
>   * Simplify CNI configuration, we have calico and flannel. Ideally we
> should a single config script for each
> one. We could move flannel to the kubernetes hosted version that
> uses kubernetes objects for storage.
> (it is the recommended way by flannel and how it is done with kubeadm)
>   * magum support in gophercloud
> https://github.com/gophercloud/gophercloud/issues/1003
>   * *needs discussion *update version of heat templates (pike or
> queens) This need its own tread
>   * Post deployment scripts for clusters, I have this since some time
> for my but doing it in
> heat is slightly (not a lot) complicated. Most magnum users favor 
> the simpler solution
> of passing a url of a manifest or script to the cluster (at least
> let's add sha512sum).
>   * Simplify addition of custom labels/parameters. To avoid patcing
> magnum, it would be
> more ops friendly to have a generic field of custom parameters
>
> Not discussed in the last meeting but we should in the next ones:
>
>   * Allow cluster scaling from different users in the same project
> https://storyboard.openstack.org/#!/story/2002648
> 
>   * Add the option to remove node from a resource group for swarm
> clusters like
> in kubernetes
> https://storyboard.openstack.org/#!/story/2002677
> 
>
> Let's follow these up in the coming meetings, Tuesday 1000UTC and
> Thursday 1700UTC.
>
> You can always consult this page [1] for future meetings.
>
> Cheers,
> Spyros
>
> [1] https://wiki.openstack.org/wiki/Meetings/Containers
>
> On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  > wrote:
>
> Hello list,
>
> We are going to have a second weekly meeting for magnum for 3 weeks
> as a test to reach out to contributors in the Americas.
>
> You can join us tomorrow (or today for some?) at 1700UTC in
> #openstack-containers .
>
> Cheers,
> Spyros
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-25 Thread Spyros Trigazis
Hello again,

After Thursday's meeting I want to summarize what we discussed and add some
pointers.


   - Work on using the out-of-tree cloud provider and move to the new model
   of defining it
   https://storyboard.openstack.org/#!/story/1762743
   https://review.openstack.org/#/c/577477/
   - Configure kubelet and kube-proxy on master nodes
   This story of the master node label can be extended
   https://storyboard.openstack.org/#!/story/2002618
   or we can add a new one
   - Simplify CNI configuration; we have calico and flannel. Ideally we
   should have a single config script for each
   one. We could move flannel to the kubernetes hosted version that uses
   kubernetes objects for storage.
   (it is the recommended way by flannel and how it is done with kubeadm)
   - magnum support in gophercloud
   https://github.com/gophercloud/gophercloud/issues/1003
   - *needs discussion* update the version of the heat templates (pike or
   queens). This needs its own thread
   - Post-deployment scripts for clusters. I have had this for some time,
   but doing it in
   heat is slightly (not a lot) complicated. Most magnum users favor the
   simpler solution
   of passing a url of a manifest or script to the cluster (at least let's
   add sha512sum).
   - Simplify addition of custom labels/parameters. To avoid patching
   magnum, it would be
   more ops friendly to have a generic field of custom parameters

Not discussed in the last meeting but we should in the next ones:

   - Allow cluster scaling from different users in the same project
   https://storyboard.openstack.org/#!/story/2002648
   - Add the option to remove a node from a resource group for swarm
   clusters, like in kubernetes
   https://storyboard.openstack.org/#!/story/2002677

Let's follow these up in the coming meetings, Tuesday 1000UTC and Thursday
1700UTC.

You can always consult this page [1] for future meetings.

Cheers,
Spyros

[1] https://wiki.openstack.org/wiki/Meetings/Containers

On Wed, 20 Jun 2018 at 18:05, Spyros Trigazis  wrote:

> Hello list,
>
> We are going to have a second weekly meeting for magnum for 3 weeks
> as a test to reach out to contributors in the Americas.
>
> You can join us tomorrow (or today for some?) at 1700UTC in
> #openstack-containers .
>
> Cheers,
> Spyros
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Remo Mattei
Thanks Fei,
I did post the question on that channel; not much noise there though. I would
really like to get this configured since we are pushing for production.

Thanks 

> On Jun 20, 2018, at 8:27 PM, Fei Long Wang  wrote:
> 
> Hi Remo,
> 
> I can't see obvious issue from the log you posted. You can pop up at 
> #openstack-containers IRC channel as for Magnum questions. Cheers.
> 
> 
> On 21/06/18 08:56, Remo Mattei wrote:
>> Hello guys, what will be the right channel to as a question about having K8 
>> (magnum working with Tripleo)? 
>> 
>> I have the following errors..
>> 
>> http://pastebin.mattei.co/index.php/view/2d1156f1 
>> 
>> 
>> Any tips are appreciated. 
>> 
>> Thanks 
>> Remo 
>> 
>>> On Jun 19, 2018, at 2:13 PM, Fei Long Wang >> > wrote:
>>> 
>>> Hi there,
>>> 
>>> For people who maybe still interested in this issue. I have proposed a 
>>> patch, see https://review.openstack.org/576029 
>>>  And I have verified with Sonobuoy for 
>>> both multi masters (3 master nodes) and single master clusters, all worked. 
>>> Any comments will be appreciated. Thanks.
>>> 
>>> 
>>> On 21/05/18 01:22, Sergey Filatov wrote:
 Hi!
 I’d like to initiate a discussion about this bug: [1].
 To resolve this issue we need to generate a secret cert and pass it to 
 master nodes. We also need to store it somewhere to support scaling.
 This issue is specific for kubernetes drivers. Currently in magnum we have 
 a general cert manager which is the same for all the drivers.
 
 What do you think about moving cert_manager logic into a driver-specific 
 area?
 Having this common cert_manager logic forces us to generate client cert 
 with “admin” and “system:masters” subject & organisation names [2], 
 which is really something that we need only for kubernetes drivers.
 
 [1] https://bugs.launchpad.net/magnum/+bug/1766546 
 
 [2] 
 https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
  
 
 
 
 ..Sergey Filatov
 
 
 
> On 20 Apr 2018, at 20:57, Sergey Filatov  > wrote:
> 
> Hello,
> 
> I looked into k8s drivers for magnum I see that each api-server on master 
> node generates it’s own service-account-key-file. This causes issues with 
> service-accounts authenticating on api-server. (In case api-server 
> endpoint moves).
> As far as I understand we should have either all api-server keys synced 
> on api-servesr or pre-generate single api-server key.
> 
> What is the way for magnum to get over this issue?
 
 
 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
 
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
 
>>> 
>>> -- 
>>> Cheers & Best regards,
>>> Feilong Wang (王飞龙)
>>> --
>>> Senior Cloud Software Engineer
>>> Tel: +64-48032246
>>> Email: flw...@catalyst.net.nz 
>>> Catalyst IT Limited
>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>> -- 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
>>> ?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>>> 
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> -- 
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz 

Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Fei Long Wang
Hi Remo,

I can't see an obvious issue from the log you posted. You can pop into the
#openstack-containers IRC channel for Magnum questions. Cheers.


On 21/06/18 08:56, Remo Mattei wrote:
> Hello guys, what will be the right channel to as a question about
> having K8 (magnum working with Tripleo)? 
>
> I have the following errors..
>
> http://pastebin.mattei.co/index.php/view/2d1156f1
>
> Any tips are appreciated. 
>
> Thanks 
> Remo 
>
>> On Jun 19, 2018, at 2:13 PM, Fei Long Wang > > wrote:
>>
>> Hi there,
>>
>> For people who maybe still interested in this issue. I have proposed
>> a patch, see https://review.openstack.org/576029 And I have verified
>> with Sonobuoy for both multi masters (3 master nodes) and single
>> master clusters, all worked. Any comments will be appreciated. Thanks.
>>
>>
>> On 21/05/18 01:22, Sergey Filatov wrote:
>>> Hi!
>>> I’d like to initiate a discussion about this bug: [1].
>>> To resolve this issue we need to generate a secret cert and pass it
>>> to master nodes. We also need to store it somewhere to support scaling.
>>> This issue is specific for kubernetes drivers. Currently in magnum
>>> we have a general cert manager which is the same for all the drivers.
>>>
>>> What do you think about moving cert_manager logic into a
>>> driver-specific area?
>>> Having this common cert_manager logic forces us to generate client
>>> cert with “admin” and “system:masters” subject & organisation names
>>> [2], 
>>> which is really something that we need only for kubernetes drivers.
>>>
>>> [1] https://bugs.launchpad.net/magnum/+bug/1766546
>>> [2] 
>>> https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
>>>
>>>
>>> ..Sergey Filatov
>>>
>>>
>>>
 On 20 Apr 2018, at 20:57, Sergey Filatov >>> > wrote:

 Hello,

 I looked into k8s drivers for magnum I see that each api-server on
 master node generates it’s own service-account-key-file. This
 causes issues with service-accounts authenticating on api-server.
 (In case api-server endpoint moves).
 As far as I understand we should have either all api-server keys
 synced on api-servesr or pre-generate single api-server key.

 What is the way for magnum to get over this issue?
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> -- 
>> Cheers & Best regards,
>> Feilong Wang (王飞龙)
>> --
>> Senior Cloud Software Engineer
>> Tel: +64-48032246
>> Email: flw...@catalyst.net.nz
>> Catalyst IT Limited
>> Level 6, Catalyst House, 150 Willis Street, Wellington
>> -- 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-20 Thread Remo Mattei
Hello guys, what would be the right channel to ask a question about having K8s
(magnum) working with TripleO?

I have the following errors..

http://pastebin.mattei.co/index.php/view/2d1156f1

Any tips are appreciated. 

Thanks 
Remo 

> On Jun 19, 2018, at 2:13 PM, Fei Long Wang  wrote:
> 
> Hi there,
> 
> For people who maybe still interested in this issue. I have proposed a patch, 
> see https://review.openstack.org/576029  
> And I have verified with Sonobuoy for both multi masters (3 master nodes) and 
> single master clusters, all worked. Any comments will be appreciated. Thanks.
> 
> 
> On 21/05/18 01:22, Sergey Filatov wrote:
>> Hi!
>> I’d like to initiate a discussion about this bug: [1].
>> To resolve this issue we need to generate a secret cert and pass it to 
>> master nodes. We also need to store it somewhere to support scaling.
>> This issue is specific for kubernetes drivers. Currently in magnum we have a 
>> general cert manager which is the same for all the drivers.
>> 
>> What do you think about moving cert_manager logic into a driver-specific 
>> area?
>> Having this common cert_manager logic forces us to generate client cert with 
>> “admin” and “system:masters” subject & organisation names [2], 
>> which is really something that we need only for kubernetes drivers.
>> 
>> [1] https://bugs.launchpad.net/magnum/+bug/1766546 
>> 
>> [2] 
>> https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
>>  
>> 
>> 
>> 
>> ..Sergey Filatov
>> 
>> 
>> 
>>> On 20 Apr 2018, at 20:57, Sergey Filatov >> > wrote:
>>> 
>>> Hello,
>>> 
>>> I looked into k8s drivers for magnum I see that each api-server on master 
>>> node generates it’s own service-account-key-file. This causes issues with 
>>> service-accounts authenticating on api-server. (In case api-server endpoint 
>>> moves).
>>> As far as I understand we should have either all api-server keys synced on 
>>> api-servesr or pre-generate single api-server key.
>>> 
>>> What is the way for magnum to get over this issue?
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
>> 
> 
> -- 
> Cheers & Best regards,
> Feilong Wang (王飞龙)
> --
> Senior Cloud Software Engineer
> Tel: +64-48032246
> Email: flw...@catalyst.net.nz 
> Catalyst IT Limited
> Level 6, Catalyst House, 150 Willis Street, Wellington
> -- 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] New temporary meeting on Thursdays 1700UTC

2018-06-20 Thread Spyros Trigazis
Hello list,

We are going to have a second weekly meeting for magnum for 3 weeks
as a test to reach out to contributors in the Americas.

You can join us tomorrow (or today for some?) at 1700UTC in
#openstack-containers .

Cheers,
Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-06-19 Thread Fei Long Wang
Hi there,

For people who may still be interested in this issue: I have proposed a
patch, see https://review.openstack.org/576029, and I have verified it with
Sonobuoy for both multi-master (3 master nodes) and single-master
clusters; all worked. Any comments will be appreciated. Thanks.
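In case it helps with reviewing, a minimal Sonobuoy run to repeat the
verification is roughly the following (it only assumes kubectl access to the
cluster; flags may differ between Sonobuoy versions):

    sonobuoy run        # start the conformance tests in the cluster
    sonobuoy status     # poll until the run has completed
    sonobuoy retrieve . # fetch the results tarball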


On 21/05/18 01:22, Sergey Filatov wrote:
> Hi!
> I’d like to initiate a discussion about this bug: [1].
> To resolve this issue we need to generate a secret cert and pass it to
> master nodes. We also need to store it somewhere to support scaling.
> This issue is specific for kubernetes drivers. Currently in magnum we
> have a general cert manager which is the same for all the drivers.
>
> What do you think about moving cert_manager logic into a
> driver-specific area?
> Having this common cert_manager logic forces us to generate client
> cert with “admin” and “system:masters” subject & organisation names [2], 
> which is really something that we need only for kubernetes drivers.
>
> [1] https://bugs.launchpad.net/magnum/+bug/1766546
> [2] 
> https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
>
>
> ..Sergey Filatov
>
>
>
>> On 20 Apr 2018, at 20:57, Sergey Filatov > > wrote:
>>
>> Hello,
>>
>> I looked into k8s drivers for magnum I see that each api-server on
>> master node generates it’s own service-account-key-file. This causes
>> issues with service-accounts authenticating on api-server. (In case
>> api-server endpoint moves).
>> As far as I understand we should have either all api-server keys
>> synced on api-servesr or pre-generate single api-server key.
>>
>> What is the way for magnum to get over this issue?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-05-20 Thread Sergey Filatov
Hi!
I’d like to initiate a discussion about this bug: [1].
To resolve this issue we need to generate a secret cert and pass it to master 
nodes. We also need to store it somewhere to support scaling.
This issue is specific to the kubernetes drivers. Currently in magnum we have a
general cert manager which is the same for all the drivers.

What do you think about moving cert_manager logic into a driver-specific area?
Having this common cert_manager logic forces us to generate a client cert with
“admin” and “system:masters” as the subject & organisation names [2],
which is really something that we need only for the kubernetes drivers.

[1] https://bugs.launchpad.net/magnum/+bug/1766546 

[2] 
https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64
 



..Sergey Filatov



> On 20 Apr 2018, at 20:57, Sergey Filatov  wrote:
> 
> Hello,
> 
> I looked into k8s drivers for magnum I see that each api-server on master 
> node generates it’s own service-account-key-file. This causes issues with 
> service-accounts authenticating on api-server. (In case api-server endpoint 
> moves).
> As far as I understand we should have either all api-server keys synced on 
> api-servesr or pre-generate single api-server key.
> 
> What is the way for magnum to get over this issue?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] K8S apiserver key sync

2018-04-23 Thread Spyros Trigazis
Hi Sergey,

In magnum queens we can set the private ca as a service account key.
Here [1] we can set the ca.key file. When the label cert_manager_api is
set to true.
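For example, roughly like this (image and flavor names are placeholders; the
cert_manager_api label is the relevant part):

    openstack coe cluster template create k8s-template \
      --image fedora-atomic-latest \
      --external-network public \
      --master-flavor m1.small --flavor m1.small \
      --coe kubernetes \
      --labels cert_manager_api=true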

Cheers,
Spyros

[1]
https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh#L32

On 20 April 2018 at 19:57, Sergey Filatov  wrote:

> Hello,
>
> I looked into k8s drivers for magnum I see that each api-server on master
> node generates it’s own service-account-key-file. This causes issues with
> service-accounts authenticating on api-server. (In case api-server endpoint
> moves).
> As far as I understand we should have either all api-server keys synced on
> api-servesr or pre-generate single api-server key.
>
> What is the way for magnum to get over this issue?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] K8S apiserver key sync

2018-04-20 Thread Sergey Filatov
Hello,

I looked into the k8s drivers for magnum and I see that each api-server on a master
node generates its own service-account-key-file. This causes issues with
service-accounts authenticating on the api-server (in case the api-server endpoint
moves).
As far as I understand, we should either have all api-server keys synced across
api-servers or pre-generate a single api-server key.
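For reference, these are the standard kubernetes flags involved (file paths
below are placeholders); the point is that every master has to use the same
pre-generated key pair:

    # on every master: the same key everywhere, not a per-node one
    kube-apiserver ... --service-account-key-file=/etc/kubernetes/certs/service_account.key
    kube-controller-manager ... --service-account-private-key-file=/etc/kubernetes/certs/service_account.key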

What is the way for magnum to get over this issue?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Ricardo Rocha
Hi.

I had added an item for this:
https://bugs.launchpad.net/magnum/+bug/1752433

after the last reply and a bit of searching around.

It's not urgent but we have already got a couple of cases in our deployment.

Cheers,
Ricardo

On Thu, Mar 1, 2018 at 3:44 PM, Spyros Trigazis  wrote:
> Hello,
>
> After discussion with the keystone team at the above session, keystone
> will not provide a way to transfer trusts nor application credentials,
> since it doesn't address the above problem (the member that leaves the team
> can auth with keystone if he has the trust/app-creds).
>
> In magnum we need a way for admins and the cluster owner to rotate the
> trust or app-creds and certificates.
>
> We can leverage the existing rotate_ca api for rotating the ca and at the
> same
> time the trust. Since this api is designed only to rotate the ca, we can
> add a cluster action to transter ownership of the cluster. This action
> should be
> allowed to be executed by the admin or the current owner of a given cluster.
>
> At the same time, the trust created by heat for every stack suffers from the
> same problem, we should check with the heat team what is their plan.
>
> Cheers,
> Spyros
>
> On 27 February 2018 at 20:53, Ricardo Rocha  wrote:
>>
>> Hi Lance.
>>
>> On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad 
>> wrote:
>> >
>> >
>> > On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
>> >> Hi.
>> >>
>> >> We have an issue on the way Magnum uses keystone trusts.
>> >>
>> >> Magnum clusters are created in a given project using HEAT, and require
>> >> a trust token to communicate back with OpenStack services -  there is
>> >> also integration with Kubernetes via a cloud provider.
>> >>
>> >> This trust belongs to a given user, not the project, so whenever we
>> >> disable the user's account - for example when a user leaves the
>> >> organization - the cluster becomes unhealthy as the trust is no longer
>> >> valid. Given the token is available in the cluster nodes, accessible
>> >> by users, a trust linked to a service account is also not a viable
>> >> solution.
>> >>
>> >> Is there an existing alternative for this kind of use case? I guess
>> >> what we might need is a trust that is linked to the project.
>> > This was proposed in the original application credential specification
>> > [0] [1]. The problem is that you're sharing an authentication mechanism
>> > with multiple people when you associate it to the life cycle of a
>> > project. When a user is deleted or removed from the project, nothing
>> > would stop them from accessing OpenStack APIs if the application
>> > credential or trust isn't rotated out. Even if the credential or trust
>> > were scoped to the project's life cycle, it would need to be rotated out
>> > and replaced when users come and go for the same reason. So it would
>> > still be associated to the user life cycle, just indirectly. Otherwise
>> > you're allowing unauthorized access to something that should be
>> > protected.
>> >
>> > If you're at the PTG - we will be having a session on application
>> > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
>> > room [3].
>>
>> Thanks for the reply, i now understand the issue.
>>
>> I'm not at the PTG. Had a look at the etherpad but it seems app
>> credentials will have a similar lifecycle so not suitable for the use
>> case above - for the same reasons you mention.
>>
>> I wonder what's the alternative to achieve what we need in Magnum?
>>
>> Cheers,
>>   Ricardo
>>
>> > [0] https://review.openstack.org/#/c/450415/
>> > [1] https://review.openstack.org/#/c/512505/
>> > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
>> > [3] http://ptg.openstack.org/ptg.html
>> >>
>> >> I believe the same issue would be there using application credentials,
>> >> as the ownership is similar.
>> >>
>> >> Cheers,
>> >>   Ricardo
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for 

Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-03-01 Thread Spyros Trigazis
Hello,

After discussion with the keystone team at the above session, keystone
will not provide a way to transfer trusts nor application credentials,
since it doesn't address the above problem (the member that leaves the team
can auth with keystone if he has the trust/app-creds).

In magnum we need a way for admins and the cluster owner to rotate the
trust or app-creds and certificates.

We can leverage the existing rotate_ca api for rotating the ca and at the
same
time the trust. Since this api is designed only to rotate the ca, we can
add a cluster action to transfer ownership of the cluster. This action
should be
allowed to be executed by the admin or the current owner of a given cluster.
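For reference, the existing rotation is driven today with something like this
(the cluster name is a placeholder):

    openstack coe ca rotate my-cluster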

At the same time, the trust created by heat for every stack suffers from the
same problem, we should check with the heat team what is their plan.

Cheers,
Spyros

On 27 February 2018 at 20:53, Ricardo Rocha  wrote:

> Hi Lance.
>
> On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad 
> wrote:
> >
> >
> > On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
> >> Hi.
> >>
> >> We have an issue on the way Magnum uses keystone trusts.
> >>
> >> Magnum clusters are created in a given project using HEAT, and require
> >> a trust token to communicate back with OpenStack services -  there is
> >> also integration with Kubernetes via a cloud provider.
> >>
> >> This trust belongs to a given user, not the project, so whenever we
> >> disable the user's account - for example when a user leaves the
> >> organization - the cluster becomes unhealthy as the trust is no longer
> >> valid. Given the token is available in the cluster nodes, accessible
> >> by users, a trust linked to a service account is also not a viable
> >> solution.
> >>
> >> Is there an existing alternative for this kind of use case? I guess
> >> what we might need is a trust that is linked to the project.
> > This was proposed in the original application credential specification
> > [0] [1]. The problem is that you're sharing an authentication mechanism
> > with multiple people when you associate it to the life cycle of a
> > project. When a user is deleted or removed from the project, nothing
> > would stop them from accessing OpenStack APIs if the application
> > credential or trust isn't rotated out. Even if the credential or trust
> > were scoped to the project's life cycle, it would need to be rotated out
> > and replaced when users come and go for the same reason. So it would
> > still be associated to the user life cycle, just indirectly. Otherwise
> > you're allowing unauthorized access to something that should be
> protected.
> >
> > If you're at the PTG - we will be having a session on application
> > credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
> > room [3].
>
> Thanks for the reply, i now understand the issue.
>
> I'm not at the PTG. Had a look at the etherpad but it seems app
> credentials will have a similar lifecycle so not suitable for the use
> case above - for the same reasons you mention.
>
> I wonder what's the alternative to achieve what we need in Magnum?
>
> Cheers,
>   Ricardo
>
> > [0] https://review.openstack.org/#/c/450415/
> > [1] https://review.openstack.org/#/c/512505/
> > [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
> > [3] http://ptg.openstack.org/ptg.html
> >>
> >> I believe the same issue would be there using application credentials,
> >> as the ownership is similar.
> >>
> >> Cheers,
> >>   Ricardo
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-27 Thread Ricardo Rocha
Hi Lance.

On Mon, Feb 26, 2018 at 4:45 PM, Lance Bragstad  wrote:
>
>
> On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
>> Hi.
>>
>> We have an issue on the way Magnum uses keystone trusts.
>>
>> Magnum clusters are created in a given project using HEAT, and require
>> a trust token to communicate back with OpenStack services -  there is
>> also integration with Kubernetes via a cloud provider.
>>
>> This trust belongs to a given user, not the project, so whenever we
>> disable the user's account - for example when a user leaves the
>> organization - the cluster becomes unhealthy as the trust is no longer
>> valid. Given the token is available in the cluster nodes, accessible
>> by users, a trust linked to a service account is also not a viable
>> solution.
>>
>> Is there an existing alternative for this kind of use case? I guess
>> what we might need is a trust that is linked to the project.
> This was proposed in the original application credential specification
> [0] [1]. The problem is that you're sharing an authentication mechanism
> with multiple people when you associate it to the life cycle of a
> project. When a user is deleted or removed from the project, nothing
> would stop them from accessing OpenStack APIs if the application
> credential or trust isn't rotated out. Even if the credential or trust
> were scoped to the project's life cycle, it would need to be rotated out
> and replaced when users come and go for the same reason. So it would
> still be associated to the user life cycle, just indirectly. Otherwise
> you're allowing unauthorized access to something that should be protected.
>
> If you're at the PTG - we will be having a session on application
> credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
> room [3].

Thanks for the reply, i now understand the issue.

I'm not at the PTG. Had a look at the etherpad but it seems app
credentials will have a similar lifecycle so not suitable for the use
case above - for the same reasons you mention.

I wonder what's the alternative to achieve what we need in Magnum?

Cheers,
  Ricardo

> [0] https://review.openstack.org/#/c/450415/
> [1] https://review.openstack.org/#/c/512505/
> [2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
> [3] http://ptg.openstack.org/ptg.html
>>
>> I believe the same issue would be there using application credentials,
>> as the ownership is similar.
>>
>> Cheers,
>>   Ricardo
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Lance Bragstad


On 02/26/2018 10:17 AM, Ricardo Rocha wrote:
> Hi.
>
> We have an issue on the way Magnum uses keystone trusts.
>
> Magnum clusters are created in a given project using HEAT, and require
> a trust token to communicate back with OpenStack services -  there is
> also integration with Kubernetes via a cloud provider.
>
> This trust belongs to a given user, not the project, so whenever we
> disable the user's account - for example when a user leaves the
> organization - the cluster becomes unhealthy as the trust is no longer
> valid. Given the token is available in the cluster nodes, accessible
> by users, a trust linked to a service account is also not a viable
> solution.
>
> Is there an existing alternative for this kind of use case? I guess
> what we might need is a trust that is linked to the project.
This was proposed in the original application credential specification
[0] [1]. The problem is that you're sharing an authentication mechanism
with multiple people when you associate it to the life cycle of a
project. When a user is deleted or removed from the project, nothing
would stop them from accessing OpenStack APIs if the application
credential or trust isn't rotated out. Even if the credential or trust
were scoped to the project's life cycle, it would need to be rotated out
and replaced when users come and go for the same reason. So it would
still be associated to the user life cycle, just indirectly. Otherwise
you're allowing unauthorized access to something that should be protected.

If you're at the PTG - we will be having a session on application
credentials tomorrow (Tuesday) afternoon [2] in the identity-integration
room [3].

[0] https://review.openstack.org/#/c/450415/
[1] https://review.openstack.org/#/c/512505/
[2] https://etherpad.openstack.org/p/application-credentials-rocky-ptg
[3] http://ptg.openstack.org/ptg.html
>
> I believe the same issue would be there using application credentials,
> as the ownership is similar.
>
> Cheers,
>   Ricardo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][keystone] clusters, trustees and projects

2018-02-26 Thread Ricardo Rocha
Hi.

We have an issue on the way Magnum uses keystone trusts.

Magnum clusters are created in a given project using HEAT, and require
a trust token to communicate back with OpenStack services -  there is
also integration with Kubernetes via a cloud provider.

This trust belongs to a given user, not the project, so whenever we
disable the user's account - for example when a user leaves the
organization - the cluster becomes unhealthy as the trust is no longer
valid. Given the token is available in the cluster nodes, accessible
by users, a trust linked to a service account is also not a viable
solution.

Is there an existing alternative for this kind of use case? I guess
what we might need is a trust that is linked to the project.

I believe the same issue would be there using application credentials,
as the ownership is similar.

Cheers,
  Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Example bringup of Istio on Magnum k8s + Octavia

2018-02-19 Thread Timothy Swanson (tiswanso)
In case anyone is interested in the details, I went through the exercise of a 
basic bringup of Istio on Magnum k8s (with stable/pike):  
https://tiswanso.github.io/istio/istio_on_magnum.html

I hope to update with follow-on items that may also be explored, such as:
- Istio automatic side-car injection via adding the k8s admission controller
during cluster create (a hedged sketch of this is below)
- Add Raw VM app to istio service-mesh
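For the admission-controller item, assuming the driver exposes an
admission_control_list label in your Magnum version (an assumption on my
side, not something I verified on stable/pike), the idea would be roughly:

    # hypothetical: keep the default plugins and add the webhook admission
    # controllers that Istio's automatic sidecar injection needs
    openstack coe cluster create istio-cluster \
      --cluster-template k8s-template \
      --labels admission_control_list="NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"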


Big thanks to Spyros (strigazi) for helping me through some magnum bringup 
snags.


—Tim Swanson
(tiswanso)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Jeremy Stanley
On 2018-02-08 18:29:18 -0500 (-0500), Doug Hellmann wrote:
[...]
> Another alternative is to change the job configuration for magnum to use
> release-openstack-server instead of publish-to-pypi, at least for the
> near term. That would give the magnum team more time to make the changes
> need to modify the sdist name for the package.

And yet another (longer-term) alternative is:

https://www.python.org/dev/peps/pep-0541/#removal-of-an-abandoned-project

We're presently trying the same to gain use of the keystone name on
PyPI, and magnum's the only other service we have in that same boat
as far as I'm aware. In both cases the projects have basically been
dead for half a decade (and in the magnum case they never even seem
to have uploaded an initial package at all).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-02-08 13:00:52 -0600:
> The release job for magnum failed, but luckily it was after tagging and
> branching the release. It was not able to get to the point of uploading a
> tarball to http://tarballs.openstack.org/magnum/ though.
> 
> The problem the job encountered is that magnum is now configured to publish to
> Pypi. The tricky part ends up being that the "magnum" package on Pypi is not
> this magnum project. It appears to be an older abandoned project by someone,
> not related to OpenStack.
> 
> There is an openstack-magnum registered. But since the setup.cfg file in
> openstack/magnum has "name = magnum", it attempts to publish to the one that 
> is
> not ours.
> 
> I have put up a patch to openstack/magnum to change the name to
> openstack-magnum here:
> 
> https://review.openstack.org/#/c/542371/
> 
> That, or something like it, will need to merge and be backported to
> stable/queens before we can get this project published.
> 
> If there are any questions, please feel free to drop in to the
> #openstack-release channel.
> 
> Thanks,
> Sean

Another alternative is to change the job configuration for magnum to use
release-openstack-server instead of publish-to-pypi, at least for the
near term. That would give the magnum team more time to make the changes
needed to modify the sdist name for the package.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Release of openstack/magnum failed

2018-02-08 Thread Sean McGinnis
Apologies, I forwarded the wrong one just a bit ago. See below for the actual
links to the magnum release job failures if you wish to take a look.

Sean

- Forwarded message from z...@openstack.org -

Date: Thu, 08 Feb 2018 18:06:54 +
From: z...@openstack.org
To: release-job-failu...@lists.openstack.org
Subject: [Release-job-failures] Release of openstack/magnum failed
Reply-To: openstack-dev@lists.openstack.org

Build failed.

- release-openstack-python 
http://logs.openstack.org/df/dff1ac0f8248a75c39c5b9449de0b6c83906aff5/release/release-openstack-python/e923153/
 : POST_FAILURE in 7m 23s
- announce-release announce-release : SKIPPED
- propose-update-constraints propose-update-constraints : SKIPPED

___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

- End forwarded message -

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][release] release-post job for openstack/releases failed

2018-02-08 Thread Sean McGinnis
The release job for magnum failed, but luckily it was after tagging and
branching the release. It was not able to get to the point of uploading a
tarball to http://tarballs.openstack.org/magnum/ though.

The problem the job encountered is that magnum is now configured to publish to
Pypi. The tricky part ends up being that the "magnum" package on Pypi is not
this magnum project. It appears to be an older abandoned project by someone,
not related to OpenStack.

There is an openstack-magnum registered. But since the setup.cfg file in
openstack/magnum has "name = magnum", it attempts to publish to the one that is
not ours.

I have put up a patch to openstack/magnum to change the name to
openstack-magnum here:

https://review.openstack.org/#/c/542371/
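For anyone curious, the change itself is essentially this one line in
setup.cfg (see the review above for the exact patch):

    [metadata]
    name = openstack-magnum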

That, or something like it, will need to merge and be backported to
stable/queens before we can get this project published.

If there are any questions, please feel free to drop in to the
#openstack-release channel.

Thanks,
Sean

- Forwarded message from z...@openstack.org -

Date: Thu, 08 Feb 2018 17:09:44 +
From: z...@openstack.org
To: release-job-failu...@lists.openstack.org
Subject: [Release-job-failures] release-post job for openstack/releases failed
Reply-To: openstack-dev@lists.openstack.org

Build failed.

- tag-releases 
http://logs.openstack.org/11/1160e02315eaef3a8380af3d6dd9f707eccc214e/release-post/tag-releases/ff8305f/
 : TIMED_OUT in 32m 28s
- publish-static publish-static : SKIPPED

___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

- End forwarded message -

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] New meeting time Tue 1000UTC

2018-02-05 Thread Spyros Trigazis
Hello,

Heads up, the containers team meeting has changed from 1600UTC to 1000UTC.

See you there tomorrow at #openstack-meeting-alt !
Spyros
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Rocky Magnum PTL candidacy

2018-02-03 Thread Spyros Trigazis
Dear Stackers,

I would like to nominate myself as PTL for the Magnum project for the
Rocky cycle.

I have been consistently contributing to Magnum since February 2016 and
I am a core reviewer since August 2016. Since then, I have contributed
to significant features like cluster drivers, add Magnum tests to Rally
(I'm core reviewer to rally to help the rally team with Magnum related
reviews), wrote Magnum's installation tutorial and served as docs
liaison for the project. My latest contributions include the swarm-mode
driver, containerization of the heat-agent and the remaining kubernetes
components, fixed the long standing problem of adding custom CAs to the
clusters and brought the kubernetes driver up to date, with RBAC
configuration and the latest kubernetes dashboard. I have been the
release liaison for Magnum for Pike and served as PTL for the Queens
release. I have contributed a lot in Magnum's CI jobs (adding
multi-node, DIB and new driver jobs). I have been working closely with
other projects consumed by Magnum like Heat, Fedora Atomic, kubernetes
python client and kubernetes rpms. Despite the slow down on development
due shortage of contributions, we managed to keep the project up to date
and increase the user base.

For the next cycle, I want to enable the Magnum team to complete the
work on cluster upgrades, cluster federation, cluster auto-healing,
support for different container runtimes and container network backends.

Thanks for considering me,
Spyros Trigazis

[0]
https://git.openstack.org/cgit/openstack/election/tree/candidates/rocky/Magnum/strigazi.txt?id=7a31af003f1be68ee81229c8c828716838e5b8dd
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?

2018-01-30 Thread Waines, Greg
Any thoughts on this ?
Greg.

From: Greg Waines <greg.wai...@windriver.com>
Reply-To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: Friday, January 19, 2018 at 3:10 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Cc: "Nasir, Shoaib" <shoaib.na...@windriver.com>
Subject: [openstack-dev] [magnum] [ironic] Why does magnum create instances 
with ports using 'fixed-ips' ?

Hey there,

We have just recently integrated MAGNUM into our OpenStack Distribution.

QUESTION:
When MAGNUM is creating the ‘instances’ for the COE master and minion nodes,
WHY does it create the instances with ports using ‘fixed-ips’ ?
- instead of just letting the instance’s port dhcp for its 
ip-address ?

I am asking this question because:

- we have also integrated IRONIC into our OpenStack Distribution, and

  - currently support the simple (somewhat non-multi-tenant) networking
    approach, i.e.

    - the ironic-provisioning-net TENANT NETWORK, used to network boot the
      IRONIC Instances, is owned by ADMIN but shared so TENANTS can create
      IRONIC instances,

    - AND, we do NOT support the functionality to have IRONIC update the
      adjacent switch configuration in order to move the IRONIC instance
      on to a different (TENANT-owned) TENANT NETWORK after the instance
      is created.

  - so it is SORT OF multi-tenant in the sense that any TENANT can create an
    IRONIC instance, HOWEVER the IRONIC instances of all tenants are all on
    the same TENANT NETWORK

- In this environment, when we use MAGNUM to create IRONIC COE Nodes

  - it ONLY works if the ADMIN creates the MAGNUM Cluster,

  - it does NOT work if a TENANT creates the MAGNUM Cluster,

    - because a TENANT can NOT create an instance port with ‘fixed-ips’ on a
      TENANT NETWORK that is not owned by himself.

appreciate any info on this,
Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Any plan to resume nodegroup work?

2018-01-29 Thread Wan-yen Hsu
Hi,

  I saw magnum nodegroup specs  https://review.openstack.org/425422,
https://review.openstack.org/433680, and
https://review.openstack.org/425431 were last updated a year ago. Is there
any plan to resume this work, or is it superseded by other specs or features?

  Thanks!

Regards,
Wan-yen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] [ironic] Why does magnum create instances with ports using 'fixed-ips' ?

2018-01-19 Thread Waines, Greg
Hey there,

We have just recently integrated MAGNUM into our OpenStack Distribution.

QUESTION:
When MAGNUM is creating the ‘instances’ for the COE master and minion nodes,
WHY does it create the instances with ports using ‘fixed-ips’ ?
- instead of just letting the instance’s port dhcp for its 
ip-address ?

I am asking this question because:

- we have also integrated IRONIC into our OpenStack Distribution, and

  - currently support the simple (somewhat non-multi-tenant) networking
    approach, i.e.

    - the ironic-provisioning-net TENANT NETWORK, used to network boot the
      IRONIC Instances, is owned by ADMIN but shared so TENANTS can create
      IRONIC instances,

    - AND, we do NOT support the functionality to have IRONIC update the
      adjacent switch configuration in order to move the IRONIC instance
      on to a different (TENANT-owned) TENANT NETWORK after the instance
      is created.

  - so it is SORT OF multi-tenant in the sense that any TENANT can create an
    IRONIC instance, HOWEVER the IRONIC instances of all tenants are all on
    the same TENANT NETWORK

- In this environment, when we use MAGNUM to create IRONIC COE Nodes

  - it ONLY works if the ADMIN creates the MAGNUM Cluster,

  - it does NOT work if a TENANT creates the MAGNUM Cluster,

    - because a TENANT can NOT create an instance port with ‘fixed-ips’ on a
      TENANT NETWORK that is not owned by himself.
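To illustrate that last point, the operation that fails for a plain TENANT is
essentially the following (names and the IP are placeholders), since the
default Neutron policy only lets the network owner or an admin create a port
with fixed-ips on that network:

    openstack port create --network ironic-provisioning-net \
      --fixed-ip subnet=ironic-provisioning-subnet,ip-address=10.0.0.50 \
      coe-master-0-port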

appreciate any info on this,
Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers

2018-01-09 Thread Spyros Trigazis
Hi Greg,

You can try to build an image with this process [1]. I haven't used it for
some time since
we rely on the upstream image.

Another option that I would like to investigate is to build a system
container with
frakti or clear containers, similar to these container images [2] [3] [4].
Then you can install
that container on the atomic host.
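As a rough sketch of that second option (the image name below is just a
placeholder, not a published image), installing such a system container on
the atomic host would look something like:

    sudo atomic pull --storage ostree registry.example.org/containers/frakti:latest
    sudo atomic install --system --name frakti registry.example.org/containers/frakti:latest
    sudo systemctl start frakti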

We could discuss this during the magnum meeting today at 16h00 UTC in
#openstack-meeting-alt [5].

Cheers,
Spyros

[1]
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/image/fedora-atomic/README.rst
[2]
https://github.com/kubernetes-incubator/cri-o/tree/master/contrib/system_containers/fedora
[3]
https://github.com/projectatomic/atomic-system-containers/tree/master/docker-centos
[4]
https://gitlab.cern.ch/cloud/atomic-system-containers/tree/cern-qa/docker-centos
[5] https://wiki.openstack.org/wiki/Meetings/Containers

On 8 January 2018 at 16:42, Waines, Greg  wrote:

> Hey there,
>
>
>
> I am currently running magnum with the fedora-atomic image that is
> installed as part of the devstack installation of magnum.
>
> This fedora-atomic image has kubernetes with a CRI of the standard docker
> container.
>
>
>
> Where can i find (or how do i build) a fedora-atomic image with kubernetes
> and either frakti or clear containers (runV) as the CRI ?
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] fedora atomic image with kubernetes with a CRI = frakti or clear containers

2018-01-08 Thread Waines, Greg
Hey there,

I am currently running magnum with the fedora-atomic image that is installed as 
part of the devstack installation of magnum.
This fedora-atomic image has kubernetes with a CRI of the standard docker 
container.

Where can i find (or how do i build) a fedora-atomic image with kubernetes and 
either frakti or clear containers (runV) as the CRI ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-28 Thread Sergio Morales Acuña
Can you help explain or point me to more information about your comments on
this:

"For RBAC, you need 1.8 and with Pike you can get it. just by changing
one parameter." I checked the repo on github and RBAC was referenced only
in a comment.  No labels. What parameter?

"In fedora atomic 27 kubernetes etcd and flannel are removed from the
base image so running them in containers is the only way". I tried this on
my new Pike cloud, but only the kubernetes-* components were running in
containers. No sign of etcd or flannel. Do I need a special image of Atomic
27? I can't find the install process for etcd in the driver code (I'm using
magnum 5.0.1).

Thanks for your help.

On Wed, 22 Nov 2017 at 5:30, Spyros Trigazis ()
wrote:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
>
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Chees,
> Spyros
>
> >
> > Cheers
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-24 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 20:37, Sergio Morales Acuña  wrote:
> Dear Spyros:
>
> Thanks for your answer. I'm moving my cloud to Pike!.
>
> The problems I encountered were with the TCP listeners for the etcd's
> LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I have to put
> a -k).

In [1] and [2] the certs are passed to curl. Is there another issue that
means you need -k?

[1] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/network-config-service.sh?h=stable%2Focata#n50
[2] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/network-config-service.sh?h=stable/ocata#n56
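
For context, the fragments above pass the cluster certificates to curl rather
than disabling verification; schematically (paths and endpoint are
illustrative, not the exact ones used by the templates):

curl -sf --cacert /etc/kubernetes/certs/ca.crt \
     --cert /etc/kubernetes/certs/client.crt \
     --key /etc/kubernetes/certs/client.key \
     https://<etcd-lb-ip>:2379/v2/keys

so -k should normally not be needed once the certificates are in place.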

>
> I'm using Kolla Binary with Centos 7, so I also have problems with kubernets
> python libreries (they needed updates to be able to handle IPADDRESS on
> certificates)

I think this problem is fixed in ocata [3], what did you have to change?

[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/kubernetes/fragments/make-cert.sh?h=stable%2Focata

>
> Cheers and thanks again.

If you discover any bugs please report them, and if you need anything feel
free to ask here or in #openstack-containers.

Cheers,
Spyros

>
>
> El mié., 22 nov. 2017 a las 5:30, Spyros Trigazis ()
> escribió:
>>
>> Hi Sergio,
>>
>> On 22 November 2017 at 03:31, Sergio Morales Acuña 
>> wrote:
>> > I'm using Openstack Ocata and trying Magnum.
>> >
>> > I encountered a lot of problems but I been able to solved many of them.
>>
>> Which problems did you encounter? Can you be more specific? Can we solve
>> them
>> for everyone else?
>>
>> >
>> > Now I'm curious about some aspects of Magnum:
>> >
>> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
>> > create a custom fedora-atomic-27? What about RBAC?
>>
>> Since Pike, magnum is running kubernetes in containers on fedora 26.
>> In fedora atomic 27 kubernetes etcd and flannel are removed from the
>> base image so running them in containers is the only way.
>>
>> For RBAC, you need 1.8 and with Pike you can get it. just by changing
>> one parameter.
>>
>> >
>> > ¿Any one here using Magnum on daily basis? If yes, What version are you
>> > using?
>>
>> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
>> running
>> Pike and we use only the fedora atomic drivers.
>>
>> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
>> Vexxhost is running magnum:
>> https://vexxhost.com/public-cloud/container-services/kubernetes/
>> Stackhpc:
>> https://www.stackhpc.com/baremetal-cloud-capacity.html
>>
>> >
>> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need
>> > to
>> > upgrade Magnum to follow K8S's crazy changes?
>>
>> Atomic is maintained and supported much more than CoreOS in magnum.
>> There wasn't much interest from developers for CoreOS.
>>
>> >
>> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>>
>> Magnum Ocata is not too old but it will eventually be since it misses the
>> capability of running kubernetes on containers. Pike allows this option
>> and can
>> keep up with kubernetes easily.
>>
>> >
>> > ¿Where I can found updated articles about the state of Magnum and it's
>> > future?
>>
>> I did the project update presentation for magnum at the Sydney summit.
>> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>>
>> Chees,
>> Spyros
>>
>> >
>> > Cheers
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Sergio Morales Acuña
Dear Spyros:

Thanks for your answer. I'm moving my cloud to Pike!

The problems I encountered were with the TCP listeners for the etcd
LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I had to
add a -k).

I'm using Kolla Binary with CentOS 7, so I also have problems with the
kubernetes python libraries (they needed updates to be able to handle
IP addresses in certificates).

Cheers and thanks again.

On Wed, 22 Nov 2017 at 5:30, Spyros Trigazis ()
wrote:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
>
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Chees,
> Spyros
>
> >
> > Cheers
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Hongbin Lu
For the record, if the magnum team is not interested in maintaining the CoreOS
driver, that is an indication that this driver should be split out and
maintained by another team. CoreOS is one of the prevailing container OSes.
I believe there will be a lot of interest after the split.

Disclaimer: I am an author of the CoreOS driver

Best regards,
Hongbin

On Wed, Nov 22, 2017 at 3:29 AM, Spyros Trigazis  wrote:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/
> 01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Chees,
> Spyros
>
> >
> > Cheers
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
I forgot to include the Pike release notes
https://docs.openstack.org/releasenotes/magnum/pike.html

Spyros

On 22 November 2017 at 09:29, Spyros Trigazis  wrote:
> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
>> I'm using Openstack Ocata and trying Magnum.
>>
>> I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve them
> for everyone else?
>
>>
>> Now I'm curious about some aspects of Magnum:
>>
>> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
>> create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
>>
>> ¿Any one here using Magnum on daily basis? If yes, What version are you
>> using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are 
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
>>
>> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
>> upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
>>
>> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option and 
> can
> keep up with kubernetes easily.
>
>>
>> ¿Where I can found updated articles about the state of Magnum and it's
>> future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Chees,
> Spyros
>
>>
>> Cheers
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
> I'm using Openstack Ocata and trying Magnum.
>
> I encountered a lot of problems but I been able to solved many of them.

Which problems did you encounter? Can you be more specific? Can we solve them
for everyone else?

>
> Now I'm curious about some aspects of Magnum:
>
> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> create a custom fedora-atomic-27? What about RBAC?

Since Pike, magnum runs kubernetes in containers on fedora 26.
In fedora atomic 27, kubernetes, etcd and flannel are removed from the
base image, so running them in containers is the only way.

For RBAC, you need 1.8, and with Pike you can get it just by changing
one parameter.
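
As a rough illustration of what "one parameter" means in practice (the label
name kube_tag and the tag value below are an assumption for the example, and
other required template options are omitted):

magnum cluster-template-create k8s-rbac-template \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels kube_tag=v1.8.1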

>
> ¿Any one here using Magnum on daily basis? If yes, What version are you
> using?

In our private cloud at CERN we have ~120 clusters with ~450 vms, we are running
Pike and we use only the fedora atomic drivers.
http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
Vexxhost is running magnum:
https://vexxhost.com/public-cloud/container-services/kubernetes/
Stackhpc:
https://www.stackhpc.com/baremetal-cloud-capacity.html

>
> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> upgrade Magnum to follow K8S's crazy changes?

Atomic is maintained and supported much more than CoreOS in magnum.
There wasn't much interest from developers for CoreOS.

>
> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?

Magnum Ocata is not too old but it will eventually be since it misses the
capability of running kubernetes on containers. Pike allows this option and can
keep up with kubernetes easily.

>
> ¿Where I can found updated articles about the state of Magnum and it's
> future?

I did the project update presentation for magnum at the Sydney summit.
https://www.openstack.org/videos/sydney-2017/magnum-project-update

Cheers,
Spyros

>
> Cheers
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-21 Thread Sergio Morales Acuña
I'm using Openstack Ocata and trying Magnum.

I encountered a lot of problems, but I have been able to solve many of them.

Now I'm curious about some aspects of Magnum:

Do I need a newer version of Magnum to run K8S 1.7? Or do I just need to
create a custom fedora-atomic-27? What about RBAC?

Is anyone here using Magnum on a daily basis? If yes, what version are you
using?

Which driver is, in your opinion, better: Atomic or CoreOS? Do I need to
upgrade Magnum to follow K8S's crazy changes?

Any tips on the CaaS problem? Is Magnum Ocata too old for this world?

Where can I find updated articles about the state of Magnum and its
future?

Cheers
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Spyros Trigazis
Hi Vahric,

A very important reason that we use fedora atomic is that we
are not maintaining a special image of our own. We use the upstream
operating system, we rely on the Fedora Project and we
contribute back to it. If we used Ubuntu, we would need to
maintain our own special qcow image.

We also use the same containers as the Fedora Atomic project
so we have container images tested by more people.

CoreOS is kubernetes oriented; they updated Docker only
last week [1] from 1.12.6 to 17.09. You can contribute a coreos
swarm-mode driver if you want, but it will rely on CoreOS
to update the docker version.

Support for swarm-mode was only added in Pike. You can
follow what Ricardo proposed or, as you said, update all
your OpenStack services.
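
For example, on Pike requesting a swarm-mode cluster looks roughly like this
(names and image are placeholders, other required options omitted):

magnum cluster-template-create swarm-mode-template \
  --coe swarm-mode \
  --image fedora-atomic-latest \
  --external-network public
magnum cluster-create my-swarm \
  --cluster-template swarm-mode-template \
  --node-count 2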

Cheers,
Spyros

[1] https://coreos.com/releases/

On 2 November 2017 at 09:34, Ricardo Rocha  wrote:
> Hi again.
>
> On Wed, Nov 1, 2017 at 9:47 PM, Vahric MUHTARYAN  wrote:
>> Hello Ricardo ,
>>
>> Thanks for your explanation and answers.
>> One more question, what is the possibility to keep using Newton (right now i 
>> have it) and use latest Magnum features like swarm mode without upgrade 
>> Openstack ? Does it possible ?
>
> I don't think this functionality is available in Magnum Newton.
>
> One option though is to upgrade only Magnum, there should be no
> dependency on more recent versions of other components - assuming you
> either have a separate control plane for Magnum or are able to split
> it.
>
> Cheers,
>   Ricardo
>
>>
>> Regards
>> VM
>>
>> On 30.10.2017 01:19, "Ricardo Rocha"  wrote:
>>
>> Hi Vahric.
>>
>> On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  
>> wrote:
>> > Hello All ,
>> >
>> >
>> >
>> > I found some blueprint about supporting Docker Swarm Mode
>> > https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>> >
>> >
>> >
>> > I understood that related development is not over yet and no any 
>> Openstack
>> > version or Magnum version to test it also looks like some more thing 
>> to do.
>> >
>> > Could you pls inform when we should expect support of Docker Swarm 
>> Mode ?
>>
>> Swarm mode is already available in Pike:
>> https://docs.openstack.org/releasenotes/magnum/pike.html
>>
>> > Another question is fedora atomic is good but looks like its not 
>> up2date for
>> > docker , instead of use Fedora Atomic , why you do not use Ubuntu, or 
>> some
>> > other OS and directly install docker with requested version ?
>>
>> Atomic also has advantages (immutable, etc), it's working well for us
>> at CERN. There are also Suse and CoreOS drivers, but i'm not familiar
>> with those.
>>
>> Most pieces have moved to Atomic system containers, including all
>> kubernetes components so the versions are decouple from the Atomic
>> version.
>>
>> We've also deployed locally a patch running docker itself in a system
>> container, this will get upstream with:
>> https://bugs.launchpad.net/magnum/+bug/1727700
>>
>> With this we allow our users to deploy clusters with any docker
>> version (selectable with a label), currently up to 17.09.
>>
>> > And last, to help to over waiting items “Next working items: ”  how we 
>> could
>> > help ?
>>
>> I'll let Spyros reply to this and give you more info on the above items 
>> too.
>>
>> Regards,
>>   Ricardo
>>
>> >
>> >
>> >
>> > Regards
>> >
>> > Vahric Muhtaryan
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-02 Thread Ricardo Rocha
Hi again.

On Wed, Nov 1, 2017 at 9:47 PM, Vahric MUHTARYAN  wrote:
> Hello Ricardo ,
>
> Thanks for your explanation and answers.
> One more question, what is the possibility to keep using Newton (right now i 
> have it) and use latest Magnum features like swarm mode without upgrade 
> Openstack ? Does it possible ?

I don't think this functionality is available in Magnum Newton.

One option though is to upgrade only Magnum, there should be no
dependency on more recent versions of other components - assuming you
either have a separate control plane for Magnum or are able to split
it.

Cheers,
  Ricardo

>
> Regards
> VM
>
> On 30.10.2017 01:19, "Ricardo Rocha"  wrote:
>
> Hi Vahric.
>
> On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  
> wrote:
> > Hello All ,
> >
> >
> >
> > I found some blueprint about supporting Docker Swarm Mode
> > https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
> >
> >
> >
> > I understood that related development is not over yet and no any 
> Openstack
> > version or Magnum version to test it also looks like some more thing to 
> do.
> >
> > Could you pls inform when we should expect support of Docker Swarm Mode 
> ?
>
> Swarm mode is already available in Pike:
> https://docs.openstack.org/releasenotes/magnum/pike.html
>
> > Another question is fedora atomic is good but looks like its not 
> up2date for
> > docker , instead of use Fedora Atomic , why you do not use Ubuntu, or 
> some
> > other OS and directly install docker with requested version ?
>
> Atomic also has advantages (immutable, etc), it's working well for us
> at CERN. There are also Suse and CoreOS drivers, but i'm not familiar
> with those.
>
> Most pieces have moved to Atomic system containers, including all
> kubernetes components so the versions are decouple from the Atomic
> version.
>
> We've also deployed locally a patch running docker itself in a system
> container, this will get upstream with:
> https://bugs.launchpad.net/magnum/+bug/1727700
>
> With this we allow our users to deploy clusters with any docker
> version (selectable with a label), currently up to 17.09.
>
> > And last, to help to over waiting items “Next working items: ”  how we 
> could
> > help ?
>
> I'll let Spyros reply to this and give you more info on the above items 
> too.
>
> Regards,
>   Ricardo
>
> >
> >
> >
> > Regards
> >
> > Vahric Muhtaryan
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-11-01 Thread Vahric MUHTARYAN
Hello Ricardo , 

Thanks for your explanation and answers.
One more question: is it possible to keep using Newton (which I have right
now) and use the latest Magnum features like swarm mode without upgrading
OpenStack?

Regards
VM

On 30.10.2017 01:19, "Ricardo Rocha"  wrote:

Hi Vahric.

On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  
wrote:
> Hello All ,
>
>
>
> I found some blueprint about supporting Docker Swarm Mode
> https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>
>
>
> I understood that related development is not over yet and no any Openstack
> version or Magnum version to test it also looks like some more thing to 
do.
>
> Could you pls inform when we should expect support of Docker Swarm Mode ?

Swarm mode is already available in Pike:
https://docs.openstack.org/releasenotes/magnum/pike.html

> Another question is fedora atomic is good but looks like its not up2date 
for
> docker , instead of use Fedora Atomic , why you do not use Ubuntu, or some
> other OS and directly install docker with requested version ?

Atomic also has advantages (immutable, etc), it's working well for us
at CERN. There are also Suse and CoreOS drivers, but i'm not familiar
with those.

Most pieces have moved to Atomic system containers, including all
kubernetes components so the versions are decouple from the Atomic
version.

We've also deployed locally a patch running docker itself in a system
container, this will get upstream with:
https://bugs.launchpad.net/magnum/+bug/1727700

With this we allow our users to deploy clusters with any docker
version (selectable with a label), currently up to 17.09.

> And last, to help to over waiting items “Next working items: ”  how we 
could
> help ?

I'll let Spyros reply to this and give you more info on the above items too.

Regards,
  Ricardo

>
>
>
> Regards
>
> Vahric Muhtaryan
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Docker Swarm Mode Support

2017-10-29 Thread Ricardo Rocha
Hi Vahric.

On Fri, Oct 27, 2017 at 9:51 PM, Vahric MUHTARYAN  wrote:
> Hello All ,
>
>
>
> I found some blueprint about supporting Docker Swarm Mode
> https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support
>
>
>
> I understood that related development is not over yet and no any Openstack
> version or Magnum version to test it also looks like some more thing to do.
>
> Could you pls inform when we should expect support of Docker Swarm Mode ?

Swarm mode is already available in Pike:
https://docs.openstack.org/releasenotes/magnum/pike.html

> Another question is fedora atomic is good but looks like its not up2date for
> docker , instead of use Fedora Atomic , why you do not use Ubuntu, or some
> other OS and directly install docker with requested version ?

Atomic also has advantages (immutable, etc), it's working well for us
at CERN. There are also Suse and CoreOS drivers, but i'm not familiar
with those.

Most pieces have moved to Atomic system containers, including all
kubernetes components so the versions are decouple from the Atomic
version.

We've also deployed locally a patch running docker itself in a system
container, this will get upstream with:
https://bugs.launchpad.net/magnum/+bug/1727700

With this we allow our users to deploy clusters with any docker
version (selectable with a label), currently up to 17.09.
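
Illustratively, once that patch lands, selecting the docker version would look
something like the following (the label name docker_version is an assumption
based on the description above, not a confirmed upstream label):

magnum cluster-template-create k8s-docker-1709 \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --external-network public \
  --labels docker_version=17.09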

> And last, to help to over waiting items “Next working items: ”  how we could
> help ?

I'll let Spyros reply to this and give you more info on the above items too.

Regards,
  Ricardo

>
>
>
> Regards
>
> Vahric Muhtaryan
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Docker Swarm Mode Support

2017-10-27 Thread Vahric MUHTARYAN
Hello All , 

 

I found some blueprint about supporting Docker Swarm Mode 
https://blueprints.launchpad.net/magnum/+spec/swarm-mode-support 

 

I understood that the related development is not finished yet and there is no
OpenStack or Magnum version to test it with; it also looks like there is more
work to do.

 

Could you please let us know when we should expect support for Docker Swarm Mode?

 

Another question: fedora atomic is good, but it looks like it is not up to date
for docker. Instead of using Fedora Atomic, why do you not use Ubuntu, or some
other OS, and directly install docker at the requested version?

 

And last, how could we help with the outstanding “Next working items”?

 

Regards

Vahric Muhtaryan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] docker registry in minion node didn't work.

2017-10-10 Thread KiYoun Sung
Hello,
Magnum team.

I installed OpenStack Newton and Magnum.
I installed Magnum from source.

I want to use docker-registry and connect it to the "admin" account's object
store, but I don't want to expose the admin password.

I created cluster-template below options.
   - coe: kubernetes
   - os: fedora_atomic
   - storage: swift
   - check "Enable Registry"

After the cluster was created, docker-registry didn't run.
So I checked the file (source) magnum/drivers/common/templates/
fragments/configure-docker-registry.sh:
it sources "/etc/sysconfig/heat-params" on the magnum minion node,
but the variables $SWIFT_REGION, $TRUSTEE_USERNAME,
$TRUSTEE_DOMAIN_ID and $TRUST_ID are not there;
only two variables are set: TRUSTEE_USER_ID and TRUSTEE_PASSWORD.

I modified "/etc/sysconfig/registry-config.yml" on the minion node manually,
and I executed the command "docker run -d -p 5000:5000 --restart=always --name registry
-v /etc/sysconfig/registry-config.yml:/etc/docker/registry/config.yml
registry:2",
but it didn't work.
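
For reference, a hand-written sketch of roughly what
/etc/sysconfig/registry-config.yml needs to contain for swift-backed storage
(all values are placeholders, and the exact keys written by the fragment may
differ):

version: 0.1
storage:
  swift:
    authurl: http://<keystone-host>:5000/v3
    username: <trustee user id>
    password: <trustee password>
    domainid: <trustee domain id>
    trustid: <trust id>
    region: <swift region>
    container: docker_registry
http:
  addr: :5000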

Does this work in a magnum kubernetes cluster on a fedora-atomic environment?
How can I configure these variables?

Thank you.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-22 Thread Spyros Trigazis
Hi Greg,

Can you revisit your policy configuration and try again?

See here:
http://git.openstack.org/cgit/openstack/magnum/plain/etc/magnum/policy.json?h=stable/newton

Cheers,
Spyros


On 22 September 2017 at 13:49, Waines, Greg <greg.wai...@windriver.com> wrote:
> Just another note on this ...
>
>
>
> We have
>
> · setup a ‘magnum’ domain, and
>
> · setup a ‘trustee_domain_admin’ user within that domain, and
>
> · gave that user and domain the admin role, and <-- actually not
> 100% sure about this
>
> · referenced these items in magnum.conf
>
> o i.e. trustee_domain_name, trustee_domain_admin_name,
> trustee_domain_admin_password
>
>
>
> ... but still seeing the trust_domain_id issue in the admin context (see
> email below).
>
>
>
> let me know if anyone has some ideas on issue or next steps to look at,
>
> Greg.
>
>
>
>
>
> From: Greg Waines <greg.wai...@windriver.com>
> Reply-To: "openstack-dev@lists.openstack.org"
> <openstack-dev@lists.openstack.org>
> Date: Wednesday, September 20, 2017 at 12:20 PM
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Cc: "Sun, Yicheng (Jerry)" <jerry@windriver.com>
> Subject: [openstack-dev] [magnum] issue with
> admin_osc.keystone().trustee_domain_id
>
>
>
> We are in the process of integrating MAGNUM into our OpenStack distribution.
>
> We are working with NEWTON version of MAGNUM.
>
> We have the MAGNUM processes up and running and configured.
>
>
>
> However we are seeing the following error (see stack trace below) on
> virtually all MAGNUM CLI calls.
>
>
>
> The code where the stack trace is triggered:
>
> def add_policy_attributes(target):
>
> """Adds extra information for policy enforcement to raw target object"""
>
> admin_context = context.make_admin_context()
>
> admin_osc = clients.OpenStackClients(admin_context)
>
> trustee_domain_id = admin_osc.keystone().trustee_domain_id
>
> target['trustee_domain_id'] = trustee_domain_id
>
> return target
>
>
>
> ( NOTE: that this code was introduced upstream as part of a fix for
> CVE-2016-7404:
>
> https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14
> )
>
>
>
> Stack Trace:
>
> File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in
> callfunction
>
> result = f(self, *args, **kwargs)
>
>
>
>   File "", line 2, in get_all
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 130,
> in wrapper
>
> exc=exception.PolicyNotAuthorized, action=action)
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 97,
> in enforce
>
> #add_policy_attributes(target)
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 106,
> in add_policy_attributes
>
> trustee_domain_id = admin_osc.keystone().trustee_domain_id
>
>
>
>   File "/usr/lib/python2.7/site-packages/magnum/common/keystone.py", line
> 237, in trustee_domain_id
>
> self.domain_admin_session
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py",
> line 136, in get_access
>
> self.auth_ref = self.get_auth_ref(session)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py",
> line 167, in get_auth_ref
>
> authenticated=False, log=False, **rkwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> 681, in post
>
> return self.request(url, 'POST', **kwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101,
> in inner
>
> return wrapped(*args, **kwargs)
>
>
>
>   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line
> 570, in request
>
> raise exceptions.from_response(resp, method, url)
>
>
>
> NotFound: The resource could not be found. (HTTP 404)
>
>
>
>
>
> Any ideas on what our issue could be ?
>
> Or next steps to investigate ?
>
>
>
> thanks in advance,
>
> Greg.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-22 Thread Waines, Greg
Just another note on this ...

We have

· setup a ‘magnum’ domain, and

· setup a ‘trustee_domain_admin’ user within that domain, and

· gave that user and domain the admin role, and <-- actually not 
100% sure about this

· referenced these items in magnum.conf

o  i.e. trustee_domain_name, trustee_domain_admin_name,
trustee_domain_admin_password

... but still seeing the trust_domain_id issue in the admin context (see email 
below).
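
For comparison, the usual setup (a sketch of the standard Magnum install
steps; names and password are placeholders) looks roughly like this:

openstack domain create magnum --description "Owns users and projects created by magnum"
openstack user create trustee_domain_admin --domain magnum --password <secret>
openstack role add --domain magnum --user-domain magnum --user trustee_domain_admin admin

and in magnum.conf:

[trust]
trustee_domain_name = magnum
trustee_domain_admin_name = trustee_domain_admin
trustee_domain_admin_password = <secret>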

let me know if anyone has some ideas on issue or next steps to look at,
Greg.


From: Greg Waines <greg.wai...@windriver.com>
Reply-To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, September 20, 2017 at 12:20 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Cc: "Sun, Yicheng (Jerry)" <jerry@windriver.com>
Subject: [openstack-dev] [magnum] issue with 
admin_osc.keystone().trustee_domain_id

We are in the process of integrating MAGNUM into our OpenStack distribution.
We are working with NEWTON version of MAGNUM.
We have the MAGNUM processes up and running and configured.

However we are seeing the following error (see stack trace below) on virtually 
all MAGNUM CLI calls.

The code where the stack trace is triggered:
def add_policy_attributes(target):
"""Adds extra information for policy enforcement to raw target object"""
admin_context = context.make_admin_context()
admin_osc = clients.OpenStackClients(admin_context)
trustee_domain_id = admin_osc.keystone().trustee_domain_id
target['trustee_domain_id'] = trustee_domain_id
return target

( NOTE: that this code was introduced upstream as part of a fix for 
CVE-2016-7404:
 
https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14
 )

Stack Trace:
File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in 
callfunction
result = f(self, *args, **kwargs)

  File "", line 2, in get_all

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 130, in 
wrapper
exc=exception.PolicyNotAuthorized, action=action)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 97, in 
enforce
#add_policy_attributes(target)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 106, in 
add_policy_attributes
trustee_domain_id = admin_osc.keystone().trustee_domain_id

  File "/usr/lib/python2.7/site-packages/magnum/common/keystone.py", line 237, 
in trustee_domain_id
self.domain_admin_session

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 
136, in get_access
self.auth_ref = self.get_auth_ref(session)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", 
line 167, in get_auth_ref
authenticated=False, log=False, **rkwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 681, 
in post
return self.request(url, 'POST', **kwargs)

  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in 
inner
return wrapped(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 570, 
in request
raise exceptions.from_response(resp, method, url)

NotFound: The resource could not be found. (HTTP 404)


Any ideas on what our issue could be ?
Or next steps to investigate ?

thanks in advance,
Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] issue with admin_osc.keystone().trustee_domain_id

2017-09-20 Thread Waines, Greg
We are in the process of integrating MAGNUM into our OpenStack distribution.
We are working with NEWTON version of MAGNUM.
We have the MAGNUM processes up and running and configured.

However we are seeing the following error (see stack trace below) on virtually 
all MAGNUM CLI calls.

The code where the stack trace is triggered:
def add_policy_attributes(target):
"""Adds extra information for policy enforcement to raw target object"""
admin_context = context.make_admin_context()
admin_osc = clients.OpenStackClients(admin_context)
trustee_domain_id = admin_osc.keystone().trustee_domain_id
target['trustee_domain_id'] = trustee_domain_id
return target

( NOTE: that this code was introduced upstream as part of a fix for 
CVE-2016-7404:
 
https://github.com/openstack/magnum/commit/2d4e617a529ea12ab5330f12631f44172a623a14
 )

Stack Trace:
File "/usr/lib/python2.7/site-packages/wsmeext/pecan.py", line 84, in 
callfunction
result = f(self, *args, **kwargs)

  File "", line 2, in get_all

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 130, in 
wrapper
exc=exception.PolicyNotAuthorized, action=action)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 97, in 
enforce
#add_policy_attributes(target)

  File "/usr/lib/python2.7/site-packages/magnum/common/policy.py", line 106, in 
add_policy_attributes
trustee_domain_id = admin_osc.keystone().trustee_domain_id

  File "/usr/lib/python2.7/site-packages/magnum/common/keystone.py", line 237, 
in trustee_domain_id
self.domain_admin_session

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 
136, in get_access
self.auth_ref = self.get_auth_ref(session)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", 
line 167, in get_auth_ref
authenticated=False, log=False, **rkwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 681, 
in post
return self.request(url, 'POST', **kwargs)

  File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in 
inner
return wrapped(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 570, 
in request
raise exceptions.from_response(resp, method, url)

NotFound: The resource could not be found. (HTTP 404)


Any ideas on what our issue could be ?
Or next steps to investigate ?

thanks in advance,
Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Weekly meetings

2017-08-28 Thread Spyros Trigazis
Hello,

As discussed in last week's meeting [0], this week and next we will
discuss plans about Queens and review blueprints. So, if you want
to add discussion items please bring them up tomorrow or next week in
our weekly meeting. If for any reason, you can't attend you can start
a thread in the mailing list.

Also this week, we will go through our blueprint list and clean it up from
obsolete blueprints.

Finally, I would like to ask you to review this blueprint [1] about cluster
federation, add your ideas and comments in the review.

Cheers,
Spyros

[0] 
http://eavesdrop.openstack.org/meetings/containers/2017/containers.2017-08-22-16.00.html
[1] https://review.openstack.org/#/c/489609/

On 22 August 2017 at 17:47, Spyros Trigazis  wrote:
> Hello,
>
> Recently we decided to have bi-weekly meetings. Starting from today we will
> have weekly meetings again.
>
> From now on, we will have our meeting every Tuesday at 1600 UTC
> in #openstack-meeting-alt . For today, that is in 13 minutes.
>
> Cheers,
> Spyros

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Weekly meetings

2017-08-22 Thread Spyros Trigazis
Hello,

Recently we decided to have bi-weekly meetings. Starting from today we will
have weekly meetings again.

From now on, we will have our meeting every Tuesday at 1600 UTC
in #openstack-meeting-alt . For today, that is in 13 minutes.

Cheers,
Spyros

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] PTL Candidacy for Queens

2017-08-04 Thread Spyros Trigazis
Hello!

I would like to nominate myself as PTL for the Magnum project for the
Queens cycle.

I have been consistently contributing to Magnum since February 2016
and have been a core reviewer since August 2016. Since then, I have
contributed to significant features like cluster drivers, added Magnum
tests to Rally (I'm a core reviewer on Rally to help the Rally team with
Magnum-related reviews), wrote Magnum's installation tutorial and
served as docs liaison for the project. My latest contribution is the
swarm-mode cluster driver. I have been the release liaison for Magnum
for Pike and I have contributed a lot to Magnum's CI jobs (adding
multi-node, DIB and new driver jobs; I haven't managed to add Magnum
to CentOS CI yet :( but we have granted access). Finally, I have been
working closely with other projects consumed by Magnum, like Heat and
Fedora Atomic.

My plans for Queens are to contribute and guide other contributors to:
* Finalize and stabilize the very much wanted feature for cluster
  upgrades.
* Add functionality to heal clusters from a failed state.
* Add functionality for federated Kubernetes clusters and potentially
  other cluster types.
* Add Kuryr as a network driver.

Thanks for considering me,
Spyros Trigazis

[0] https://review.openstack.org/490893

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] spec for cluster federation

2017-08-03 Thread Ricardo Rocha
Hi.

We've recently started looking at federating kubernetes clusters,
using some of our internal Magnum clusters and others deployed in
external clouds. With kubernetes 1.7 most of the functionality we need
is already available.

Looking forward we submitted a spec to integrate this into Magnum:
https://review.openstack.org/#/c/489609/

We will work on this once it gets approved, but please review the spec
and provide feedback.

Regards,
  Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-08-01 Thread Mark Goddard
I'm going to assume we're talking about a typical environment with nova and
neutron here. There are two separate boots to consider:

1. The IPA deployment ramdisk, which writes the user's image to the node's
disk.
2. The user's image.

Prior to 1, ironic communicates with neutron to apply additional DHCP
options required for network booting to the DHCP response issued to the
deployment ramdisk during provisioning.

When the user's image boots (assuming the node is configured for localboot
rather than netboot), it is not really any different than a typical nova
VM. The image, often using a service such as cloud-init, should arrange for
the appropriate interface(s) to start a DHCP client. Neutron will be
configured to provide a DHCP response.

The missing piece that enables the use of multiple network interfaces is
physical network awareness [1]. This feature will be available in the Pike
release, and allows an operator to tag ironic ports with the physical
network to which they are attached. With this information, ironic is able
to correctly map the virtual neutron ports passed by the user via nova to
the physical ironic ports. Previously, this mapping was essentially random
and would lead to the node's DHCP requests landing at the wrong neutron
DHCP server instance in some cases.
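
For example, with the Pike ironic client, tagging a port looks roughly like
this (port UUID and physical network name are placeholders):

openstack baremetal port set <port-uuid> --physical-network physnet1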

[1] https://bugs.launchpad.net/ironic/+bug/1666009

Mark

On 28 July 2017 at 12:29, Waines, Greg <greg.wai...@windriver.com> wrote:

> Thanks for the info Mark.
>
>
>
> A dumb question ... can’t seem to find the answer in ironic documentation
> ...
>
> · I understand how the interface over which the bare metal
> instance network boots/installs gets configured ...
> i.e. thru DHCP response/configuration from the ironic conductor dhcp/boot
> server
>
> · but how would other interfaces, get configured on the bare
> metal server by ironic ?
>
>
>
>
>
> Greg.
>
>
>
> *From: *Mark Goddard <m...@stackhpc.com>
> *Reply-To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.
> openstack.org>
> *Date: *Friday, July 28, 2017 at 5:08 AM
> *To: *"openstack-dev@lists.openstack.org" <openstack-dev@lists.
> openstack.org>
> *Subject: *Re: [openstack-dev] [magnum] Interface configuration and
> assumptions for masters/minions launched by magnum
>
>
>
> Hi Greg,
>
>
>
> Magnum clusters currently support using only a single network for all
> communication. See the heat templates[1][2] in the drivers.
>
> .
>
> On the bare metal side, currently ironic effectively supports using only a
> single network interface due to a lack of support for physical network
> awareness. This feature[3] will be added in the Pike release, at which
> point it will be possible to create bare metal instances with multiple
> ports.
>
>
>
> [1] https://github.com/openstack/magnum/tree/master/
> magnum/drivers/k8s_fedora_atomic_v1/templates
>
> [2] https://github.com/openstack/magnum/tree/master/magnum/
> drivers/k8s_coreos_v1/templates
>
> [3] https://bugs.launchpad.net/ironic/+bug/1666009
>
>
>
> Mark
>
>
>
> On 17 July 2017 at 14:27, Waines, Greg <greg.wai...@windriver.com> wrote:
>
> When MAGNUM launches a VM or Ironic instance for a COE master or minion
> node, with the COE Image,
>
> What is the interface configuration and assumptions for these nodes ?
>
> e.g.
>
> - only a single interface ?
>
> - master and minion communication over that interface ?
>
> - communication to Docker Registry or public Docker Hub over that
> interface ?
>
> - public communications for containers over that interface ?
>
>
>
> Greg.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-28 Thread Waines, Greg
Thanks for the info Mark.

A dumb question ... can’t seem to find the answer in ironic documentation ...

· I understand how the interface over which the bare metal instance 
network boots/installs gets configured ...
i.e. thru DHCP response/configuration from the ironic conductor dhcp/boot server

· but how would other interfaces get configured on the bare metal
server by ironic?


Greg.

From: Mark Goddard <m...@stackhpc.com>
Reply-To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>
Date: Friday, July 28, 2017 at 5:08 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Interface configuration and assumptions 
for masters/minions launched by magnum

Hi Greg,

Magnum clusters currently support using only a single network for all 
communication. See the heat templates[1][2] in the drivers.

On the bare metal side, currently ironic effectively supports using only a 
single network interface due to a lack of support for physical network 
awareness. This feature[3] will be added in the Pike release, at which point it 
will be possible to create bare metal instances with multiple ports.

[1] 
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_fedora_atomic_v1/templates
[2] 
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_coreos_v1/templates
[3] https://bugs.launchpad.net/ironic/+bug/1666009

Mark

On 17 July 2017 at 14:27, Waines, Greg 
<greg.wai...@windriver.com<mailto:greg.wai...@windriver.com>> wrote:
When MAGNUM launches a VM or Ironic instance for a COE master or minion node, 
with the COE Image,
What is the interface configuration and assumptions for these nodes ?
e.g.
- only a single interface ?
- master and minion communication over that interface ?
- communication to Docker Registry or public Docker Hub over that interface ?
- public communications for containers over that interface ?

Greg.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-28 Thread Mark Goddard
Hi Greg,

Magnum clusters currently support using only a single network for all
communication. See the heat templates[1][2] in the drivers.
On the bare metal side, currently ironic effectively supports using only a
single network interface due to a lack of support for physical network
awareness. This feature[3] will be added in the Pike release, at which
point it will be possible to create bare metal instances with multiple
ports.

[1]
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_fedora_atomic_v1/templates
[2]
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_coreos_v1/templates
[3] https://bugs.launchpad.net/ironic/+bug/1666009
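
If it helps, a quick way to see which Neutron resources the drivers wire up
is to grep the templates (paths are illustrative):

  git clone https://git.openstack.org/openstack/magnum
  grep -rn "OS::Neutron::" magnum/magnum/drivers/k8s_fedora_atomic_v1/templates/

You should see roughly a single private network, subnet and router shared by
all masters and minions.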

Mark

On 17 July 2017 at 14:27, Waines, Greg  wrote:

> When MAGNUM launches a VM or Ironic instance for a COE master or minion
> node, with the COE Image,
>
> What is the interface configuration and assumptions for these nodes ?
>
> e.g.
>
> - only a single interface ?
>
> - master and minion communication over that interface ?
>
> - communication to Docker Registry or public Docker Hub over that
> interface ?
>
> - public communications for containers over that interface ?
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-20 Thread Mark Goddard
Hi Greg,

You're correct - magnum features support for running on top of VMs or
baremetal. Currently baremetal is supported for kubernetes on Fedora core
only[1]. There is a cluster template parameter 'server_type', which should
be set to 'BM' for baremetal clusters.

In terms of how this works within magnum, each magnum driver advertises one
or more (OS, COE, server_type) tuples that it supports via its 'provides'
property. There is no 'container-host-driver API' - magnum drivers are
largely just a collection of heat templates and a little python glue.

Due to historic and current limitations with ironic (mostly around
networking[2] and block storage support), drivers typically support either
VM or BM. Ironic networking has improved over the last few releases, and it
is becoming feasible to support baremetal using the standard VM drivers. I
think there is a general desire within the project to only support one set
of drivers and remove the maintenance burden.

In terms of your use case, I think that your proprietary bare metal service
would likely not work with any existing drivers. If it could be integrated
with heat, there there is a chance that you could implement a magnum driver
and reuse some of the shared magnum code for configuring and running COEs.

[1]
https://github.com/openstack/magnum/tree/master/magnum/drivers/k8s_fedora_ironic_v1
[2] https://bugs.launchpad.net/magnum/+bug/1544195

On 17 July 2017 at 14:18, Waines, Greg  wrote:

> I believe the MAGNUM architecture supports using either a VM Instance or
> an Ironic Instance as the Host for the COE’s masters and minions.
>
>
>
> How is this done / abstracted within the MAGNUM Architecture ?
>
> i.e. is there a ‘container-host-driver API’ that is defined; and
> implemented for both VM and Ironic ?
>
> ( Feel free to just refer me to a URL that describes this. )
>
>
>
> The reason I ask is that I have a proprietary bare metal service that I
> would like to have MAGNUM run on top of.
>
>
>
> Greg.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Interface configuration and assumptions for masters/minions launched by magnum

2017-07-17 Thread Waines, Greg
When MAGNUM launches a VM or Ironic instance for a COE master or minion node, 
with the COE Image,
What is the interface configuration and assumptions for these nodes ?
e.g.
- only a single interface ?
- master and minion communication over that interface ?
- communication to Docker Registry or public Docker Hub over that interface ?
- public communications for containers over that interface ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Architecture support for either VM or Ironic instance as Containers' Host ?

2017-07-17 Thread Waines, Greg
I believe the MAGNUM architecture supports using either a VM Instance or an 
Ironic Instance as the Host for the COE’s masters and minions.

How is this done / abstracted within the MAGNUM Architecture ?
i.e. is there a ‘container-host-driver API’ that is defined; and implemented 
for both VM and Ironic ?
( Feel free to just refer me to a URL that describes this. )

The reason I ask is that I have a proprietary bare metal service that I would 
like to have MAGNUM run on top of.

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread KiYoun Sung
Hello, Spyros,
Thank you for your reply.

I executed the "kubectl create" command on my openstack controller node.
I downloaded the kubectl binary; its version is 2.5.1.

Below are my steps.
1) install openstack newton by fuel 10.0
2) install magnum by source (master branch) in controller node
3) install magnum-client by source in controller node
4) execute magnum cluster-template-create for kubernetes
5) execute magnum cluster-create with kubernetes-cluster-template
6) download kubectl and connect to my kubernetes cluster
7) execute kubectl get nodes, get pods => both are normal
8) finally, kubectl create -f nginx.yaml

But the 8th step failed.

Best regards.

2017-05-17 20:58 GMT+09:00 Spyros Trigazis :

>
>
> On 17 May 2017 at 06:25, KiYoun Sung  wrote:
>
>> Hello,
>> Magnum team.
>>
>> I Installed Openstack newton and magnum.
>> I installed Magnum by source(master branch).
>>
>> I have two questions.
>>
>> 1.
>> After installation,
>> I created kubernetes cluster and it's CREATE_COMPLETE,
>> and I want to create kubernetes pod.
>>
>> My create script is below.
>> --
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: nginx
>>   labels:
>> app: nginx
>> spec:
>>   containers:
>>   - name: nginx
>> image: nginx
>> ports:
>> - containerPort: 80
>> --
>>
>> I tried "kubectl create -f nginx.yaml"
>> But, error has occured.
>>
>> Error message is below.
>> error validating "pod-nginx-with-label.yaml": error validating data:
>> unexpected type: object; if you choose to ignore these errors, turn
>> validation off with --validate=false
>>
>> Why did this error occur?
>>
>
> This is not related to magnum, it is related to your client. From where do
> you execute the
> kubectl create command? You computer? Some vm with a distributed file
> system?
>
>
>>
>> 2.
>> I want to access this kubernetes cluster service(like nginx) above the
>> Openstack magnum environment from outside world.
>>
>> I refer to this guide(https://docs.openstack.o
>> rg/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works), but
>> it didn't work.
>>
>> Openstack: newton
>> Magnum: 4.1.1 (master branch)
>>
>> How can I do?
>> Do I must install Lbaasv2?
>>
>
> You need lbaas V2 with octavia preferably. Not sure what is the
> recommended way to install.
>
>
>>
>> Thank you.
>> Best regards.
>>
>
> Cheers,
> Spyros
>
>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 13:58, Spyros Trigazis  wrote:

>
>
> On 17 May 2017 at 06:25, KiYoun Sung  wrote:
>
>> Hello,
>> Magnum team.
>>
>> I Installed Openstack newton and magnum.
>> I installed Magnum by source(master branch).
>>
>> I have two questions.
>>
>> 1.
>> After installation,
>> I created kubernetes cluster and it's CREATE_COMPLETE,
>> and I want to create kubernetes pod.
>>
>> My create script is below.
>> --
>> apiVersion: v1
>> kind: Pod
>> metadata:
>>   name: nginx
>>   labels:
>> app: nginx
>> spec:
>>   containers:
>>   - name: nginx
>> image: nginx
>> ports:
>> - containerPort: 80
>> --
>>
>> I tried "kubectl create -f nginx.yaml"
>> But, error has occured.
>>
>> Error message is below.
>> error validating "pod-nginx-with-label.yaml": error validating data:
>> unexpected type: object; if you choose to ignore these errors, turn
>> validation off with --validate=false
>>
>> Why did this error occur?
>>
>
> This is not related to magnum, it is related to your client. From where do
> you execute the
> kubectl create command? You computer? Some vm with a distributed file
> system?
>
>
>>
>> 2.
>> I want to access this kubernetes cluster service(like nginx) above the
>> Openstack magnum environment from outside world.
>>
>> I refer to this guide(https://docs.openstack.o
>> rg/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works), but
>> it didn't work.
>>
>> Openstack: newton
>> Magnum: 4.1.1 (master branch)
>>
>> How can I do?
>> Do I must install Lbaasv2?
>>
>
> You need lbaas V2 with octavia preferably. Not sure what is the
> recommended way to install.
>

Have a look here:
https://docs.openstack.org/draft/networking-guide/config-lbaas.html
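
A quick way to check whether the LBaaS v2 extension is actually enabled on
your deployment is something like this (commands are illustrative):

  openstack extension list --network | grep -i lbaas
  neutron lbaas-loadbalancer-list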

Cheers,
Spyros


>
>
>>
>> Thank you.
>> Best regards.
>>
>
> Cheers,
> Spyros
>
>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-17 Thread Spyros Trigazis
On 17 May 2017 at 06:25, KiYoun Sung  wrote:

> Hello,
> Magnum team.
>
> I Installed Openstack newton and magnum.
> I installed Magnum by source(master branch).
>
> I have two questions.
>
> 1.
> After installation,
> I created kubernetes cluster and it's CREATE_COMPLETE,
> and I want to create kubernetes pod.
>
> My create script is below.
> --
> apiVersion: v1
> kind: Pod
> metadata:
>   name: nginx
>   labels:
> app: nginx
> spec:
>   containers:
>   - name: nginx
> image: nginx
> ports:
> - containerPort: 80
> --
>
> I tried "kubectl create -f nginx.yaml"
> But, error has occured.
>
> Error message is below.
> error validating "pod-nginx-with-label.yaml": error validating data:
> unexpected type: object; if you choose to ignore these errors, turn
> validation off with --validate=false
>
> Why did this error occur?
>

This is not related to magnum, it is related to your client. From where do
you execute the
kubectl create command? You computer? Some vm with a distributed file
system?
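
It is also worth comparing your client and server versions; a kubectl that
is much older than the apiserver can fail schema validation like this.
For example (illustrative):

  kubectl version
  kubectl get nodes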


>
> 2.
> I want to access this kubernetes cluster service(like nginx) above the
> Openstack magnum environment from outside world.
>
> I refer to this guide(https://docs.openstack.org/developer/magnum/dev/
> kubernetes-load-balancer.html#how-it-works), but it didn't work.
>
> Openstack: newton
> Magnum: 4.1.1 (master branch)
>
> How can I do?
> Do I must install Lbaasv2?
>

You need lbaas V2 with octavia preferably. Not sure what is the recommended
way to install.


>
> Thank you.
> Best regards.
>

Cheers,
Spyros


>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-16 Thread KiYoun Sung
Hello,
Magnum team.

I Installed Openstack newton and magnum.
I installed Magnum by source(master branch).

I have two questions.

1.
After installation,
I created a kubernetes cluster and it is CREATE_COMPLETE,
and now I want to create a kubernetes pod.

My create script is below.
--
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
app: nginx
spec:
  containers:
  - name: nginx
image: nginx
ports:
- containerPort: 80
--

I tried "kubectl create -f nginx.yaml"
But an error occurred.

Error message is below.
error validating "pod-nginx-with-label.yaml": error validating data:
unexpected type: object; if you choose to ignore these errors, turn
validation off with --validate=false

Why did this error occur?

2.
I want to access services in this kubernetes cluster (like nginx), running on the
Openstack magnum environment, from the outside world.

I referred to this guide
(https://docs.openstack.org/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works),
but it didn't work.

Openstack: newton
Magnum: 4.1.1 (master branch)

How can I do this?
Do I need to install LBaaS v2?

Thank you.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] magnum cluster-create for kubernetes-template was failed.

2017-05-12 Thread Mark Goddard
Hi,

I also hit the loopingcall error while running magnum 4.1.1 (ocata). It is
tracked by this bug: https://bugs.launchpad.net/magnum/+bug/1666790. I
cherry picked the fix to ocata locally, but this needs to be done upstream
as well.

I think that the heat stack create timeout is unrelated to that issue
though. Try the following to debug the issue:
- Check the cluster's heat stack and its component resources.
- If created, SSH to the master and slave nodes, checking systemd services
are up and cloud-init succeeded.
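
For example, something along these lines (stack ids, usernames and unit
names are illustrative):

  openstack stack list
  openstack stack resource list -n 2 <cluster-stack-id>
  ssh fedora@<master-floating-ip>
  sudo systemctl --failed
  sudo journalctl -u cloud-final --no-pager | tail -n 50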

Regards,
Mark

On 12 May 2017 at 05:57, KiYoun Sung  wrote:

> Hello,
> Magnum Team.
>
> I installed magnum on Openstack Ocata(by fuel 11.0).
> I referred to this guide.(https://docs.openstack.
> org/project-install-guide/container-infrastructure-managemen
> t/ocata/install.html)
>
> Below is my installation information.
> root@controller:~# dpkg -l | grep magnum
> magnum-api  4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service
> magnum-common   4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - API server
> magnum-conductor4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - conductor
> python-magnum   4.1.0-0ubuntu1~cloud0
>  all  OpenStack containers as a service - Python library
> python-magnumclient 2.5.0-0ubuntu1~cloud0
>  all  client library for Magnum API - Python 2.x
>
> After installation,
> I created cluster-template for kubernetes like this.
> (magnum cluster-template-create --name k8s-cluster-template \ --image
> fedora-atomic-latest \ --keypair testkey \ --external-network
> admin_floating_net \ --dns-nameserver 8.8.8.8 \ --flavor m1.small \
> --docker-volume-size 5 \ --network-driver flannel \ --coe kubernetes )
>
> and I create cluster,
> but "magnum clutser-create' command was failed.
> (magnum cluster-create --name k8s-cluster \ --cluster-template
> k8s-cluster-template \ --node-count 1 \ --timeout 10 )
>
> After 10 minutes(option "--timeout 10"),
> creation was failed, and the status is "CREATE_FAILED"
>
> I executed "openstack server list" command,
> there is a only kube-master instance.
> (root@controller:~# openstack server list
> +--+
> ---++---
> -+--+
> | ID   | Name
>  | Status | Networks   | Image Name   |
> +--+
> ---++---
> -+--+
> | bf9c5097-74fd-4457-a8a2-4feae76d4111 | k8-i27fw72w5t-0-i6lg6mzpzrl6-kube-
>| ACTIVE | private=10.0.0.9, 172.16.1.135 | fedora-atomic-latest |
> |  | master-ekjrg2v6ztss
> |||  |
> +--+
> ---+++--+
>
> )
>
> I think kube-master instance create is successful.
> I can connect that instance
> and docker container was running normally.
>
> Why this command was failed?
>
> Here is my /var/log/magnum/magnum-conductor.log and /var/log/nova-all.log.
> magnum-conductor.log have a ERROR.
> ===
> 2017-05-12 04:05:00.684 756 ERROR magnum.common.keystone
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Keystone API
> connection failed: no password, trust_id or token found.
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Exception in string
> format operation, kwargs: {'code': 500}
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception Traceback (most
> recent call last):
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception   File
> "/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 92,
> in __init__
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception self.message
> = self.message % kwargs
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception KeyError:
> u'client'
> 2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
> [req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Fixed interval
> looping call 'magnum.service.periodic.ClusterUpdateJob.update_status'
> failed
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall Traceback (most
> recent call last):
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
> "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137,
> in _run_loop
> 2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall result =
> func(*self.args, **self.kw)
> 2017-05-12 04:05:00.687 756 ERROR 

[openstack-dev] [magnum] magnum cluster-create for kubernetes-template was failed.

2017-05-11 Thread KiYoun Sung
Hello,
Magnum Team.

I installed magnum on Openstack Ocata(by fuel 11.0).
I referred to this guide:
https://docs.openstack.org/project-install-guide/container-infrastructure-management/ocata/install.html

Below is my installation information.
root@controller:~# dpkg -l | grep magnum
magnum-api           4.1.0-0ubuntu1~cloud0   all   OpenStack containers as a service
magnum-common        4.1.0-0ubuntu1~cloud0   all   OpenStack containers as a service - API server
magnum-conductor     4.1.0-0ubuntu1~cloud0   all   OpenStack containers as a service - conductor
python-magnum        4.1.0-0ubuntu1~cloud0   all   OpenStack containers as a service - Python library
python-magnumclient  2.5.0-0ubuntu1~cloud0   all   client library for Magnum API - Python 2.x

After installation,
I created a cluster-template for kubernetes like this:

magnum cluster-template-create --name k8s-cluster-template \
  --image fedora-atomic-latest \
  --keypair testkey \
  --external-network admin_floating_net \
  --dns-nameserver 8.8.8.8 \
  --flavor m1.small \
  --docker-volume-size 5 \
  --network-driver flannel \
  --coe kubernetes

and then I created a cluster,
but the "magnum cluster-create" command failed:

magnum cluster-create --name k8s-cluster \
  --cluster-template k8s-cluster-template \
  --node-count 1 \
  --timeout 10

After 10 minutes (the "--timeout 10" option),
creation failed and the status is "CREATE_FAILED".

I executed the "openstack server list" command;
there is only a kube-master instance:

root@controller:~# openstack server list
+--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
| ID                                   | Name                                                  | Status | Networks                       | Image Name           |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+
| bf9c5097-74fd-4457-a8a2-4feae76d4111 | k8-i27fw72w5t-0-i6lg6mzpzrl6-kube-master-ekjrg2v6ztss | ACTIVE | private=10.0.0.9, 172.16.1.135 | fedora-atomic-latest |
+--------------------------------------+-------------------------------------------------------+--------+--------------------------------+----------------------+

I think the kube-master instance was created successfully.
I can connect to that instance
and the docker container is running normally.

Why did this command fail?

Here is my /var/log/magnum/magnum-conductor.log and /var/log/nova-all.log.
magnum-conductor.log have a ERROR.
===
2017-05-12 04:05:00.684 756 ERROR magnum.common.keystone
[req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Keystone API
connection failed: no password, trust_id or token found.
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
[req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Exception in string
format operation, kwargs: {'code': 500}
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception Traceback (most
recent call last):
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception   File
"/usr/lib/python2.7/dist-packages/magnum/common/exception.py", line 92, in
__init__
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception self.message
= self.message % kwargs
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception KeyError:
u'client'
2017-05-12 04:05:00.686 756 ERROR magnum.common.exception
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
[req-e2d4ea12-ec7a-4865-9eda-d272cc43a827 - - - - -] Fixed interval looping
call 'magnum.service.periodic.ClusterUpdateJob.update_status' failed
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall Traceback (most
recent call last):
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
"/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 137,
in _run_loop
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall result =
func(*self.args, **self.kw)
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
"/usr/lib/python2.7/dist-packages/magnum/service/periodic.py", line 71, in
update_status
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
cdriver.update_cluster_status(self.ctx, self.cluster)
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
"/usr/lib/python2.7/dist-packages/magnum/drivers/heat/driver.py", line 80,
in update_cluster_status
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall
poller.poll_and_check()
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall   File
"/usr/lib/python2.7/dist-packages/magnum/drivers/heat/driver.py", line 169,
in poll_and_check
2017-05-12 04:05:00.687 756 ERROR oslo.service.loopingcall stack =
self.openstack_client.heat().stacks.get(
2017-05-12 04:05:00.687 756 ERROR 

Re: [openstack-dev] [magnum][containers] Size of userdata in drivers

2017-05-04 Thread Ricardo Rocha
Hi Kevin.

We've hit this locally in the past, and I see the same when adding
core-dns for kubernetes atomic.

Spyros is temporarily dropping some fragments that are not needed, to
get around the issue. Is there any trick in Heat we can use? Zipping
the fragments should give some gain; is this possible?
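
As a rough check of how much zipping would buy, comparing a fragment's raw
and gzipped size works (the filename is just a placeholder):

  wc -c some-fragment.sh
  gzip -9 -c some-fragment.sh | wc -c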

Cheers,
  Ricardo

On Mon, Apr 24, 2017 at 11:56 PM, Kevin Lefevre  wrote:
> Hi, I recently stumbled on this bug 
> https://bugs.launchpad.net/magnum/+bug/1680900 in which Spyros says we are 
> about to hit the 64k limit for Nova user-data.
>
> One way to prevent this is to reduce the size of software config. But there 
> is still many things to be added to templates.
>
> I’m talking only about Kubernetes for now :
>
> I know some other Kubernetes projects (on AWS for example with kube-aws) are 
> using object storage (AWS S3) to bypass the limit of AWS Cloudformation and 
> store stack-templates and user-data but I don’t think it is possible on 
> OpenStack with Nova/Swift
>
> Since we rely on an internet connection anyway (except when running local 
> copy of hypercube image) for a majority of deployment when pulling hypercube 
> and other Kubernetes components, maybe we could rely on upstream for some 
> user-data and save some space.
>
> A lot of driver maintenance include syncing Kubernetes manifest from upstream 
> changes, bumping version, this is fine for the core components for now (api, 
> proxy, controller, scheduler) but is bit more tricky when we start adding the 
> addons (which are bigger and take a lot more space).
>
> Kubernetes official salt base deployment already provides templating (sed) 
> for commons addons, e.g.:
>
> https://github.com/kubernetes/kubernetes/blob/release-1.6/cluster/addons/dns/kubedns-controller.yaml.sed
>
> These template are already versioned and maintained by upstream. Depending on 
> the Kubernetes branches used we could get directly the right addons from 
> upstream. This prevents errors and having to sync and upgrade the addons.
>
> This is just a thought and of course there are downsides to this and maybe it 
> goes against the project goal because we required internet access but we 
> could for example offer a way to pull addons or other config manifest from 
> local object storage.
>
> I know this also causes problems for idempotence and gate testing because we 
> cannot vouch for upstream changes but in theory Kubernetes releases and 
> addons are already tested against a specific version by their CI.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-27 Thread Adrian Otto
> On Mar 22, 2017, at 5:48 AM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> One simplification would be:
> openstack coe create/list/show/config/update
> openstack coe template create/list/show/update
> openstack coe ca show/sign

I like Ricardo’s suggestion above. I think we should decide between the option 
above (Option 1), and this one (Option 2):

openstack coe cluster create/list/show/config/update
openstack coe cluster template create/list/show/update
openstack coe ca show/sign

Both options are clearly unique to magnum, and are unlikely to cause any future 
collisions with other projects. If you have a preference, please express it so 
we can consider your input and proceed with the implementation. I have a slight 
preference for Option 2 because it more closely reflects how I think about what 
the commands do, and follows the noun/verb pattern correctly. Please share your 
feedback.
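
To make the comparison concrete, Option 2 would read roughly like this
(flags are illustrative, not final):

  openstack coe cluster template create --coe kubernetes --image fedora-atomic-latest k8s-template
  openstack coe cluster create --cluster-template k8s-template --node-count 2 mycluster
  openstack coe ca show mycluster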

Thanks,

Adrian

> This covers all the required commands and is a bit less verbose. The
> cluster word is too generic and probably adds no useful info.
> 
> Whatever it is, kerberos support for the magnum client is very much
> needed and welcome! :)
> 
> Cheers,
>  Ricardo
> 
> On Tue, Mar 21, 2017 at 2:54 PM, Spyros Trigazis  wrote:
>> IMO, coe is a little confusing. It is a term used by people related somehow
>> to the magnum community. When I describe to users how to use magnum,
>> I spent a few moments explaining what we call coe.
>> 
>> I prefer one of the following:
>> * openstack magnum cluster create|delete|...
>> * openstack mcluster create|delete|...
>> * both the above
>> 
>> It is very intuitive for users because, they will be using an openstack
>> cloud
>> and they will be wanting to use the magnum service. So, it only make sense
>> to type openstack magnum cluster or mcluster which is shorter.
>> 
>> 
>> On 21 March 2017 at 02:24, Qiming Teng  wrote:
>>> 
>>> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
 On 03/20/2017 03:08 PM, Adrian Otto wrote:
> Team,
> 
> Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> 
 
>> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally, we’d
> like to have the name “cluster” but that word is already in use by Senlin.
 
 Unfortunately, the Senlin API uses a whole bunch of generic terms as
 top-level REST resources, including "cluster", "event", "action",
 "profile", "policy", and "node". :( I've warned before that use of
 these generic terms in OpenStack APIs without a central group
 responsible for curating the API would lead to problems like this.
 This is why, IMHO, we need the API working group to be ultimately
 responsible for preventing this type of thing from happening.
 Otherwise, there ends up being a whole bunch of duplication and same
 terms being used for entirely different things.
 
>>> 
>>> Well, I believe the name and namespaces used by Senlin is very clean.
>>> Please see the following outputs. All commands are contained in the
>>> cluster namespace to avoid any conflicts with any other projects.
>>> 
>>> On the other hand, is there any document stating that Magnum is about
>>> providing clustering service? Why Magnum cares so much about the top
>>> level noun if it is not its business?
>> 
>> 
>> From magnum's wiki page [1]:
>> "Magnum uses Heat to orchestrate an OS image which contains Docker
>> and Kubernetes and runs that image in either virtual machines or bare
>> metal in a cluster configuration."
>> 
>> Many services may offer clusters indirectly. Clusters is NOT magnum's focus,
>> but we can't refer to a collection of virtual machines or physical servers
>> with
>> another name. Bay proven to be confusing to users. I don't think that magnum
>> should reserve the cluster noun, even if it was available.
>> 
>> [1] https://wiki.openstack.org/wiki/Magnum
>> 
>>> 
>>> 
>>> 
>>> $ openstack --help | grep cluster
>>> 
>>>  --os-clustering-api-version 
>>> 
>>>  cluster action list  List actions.
>>>  cluster action show  Show detailed info about the specified action.
>>>  cluster build info  Retrieve build information.
>>>  cluster check  Check the cluster(s).
>>>  cluster collect  Collect attributes across a cluster.
>>>  cluster create  Create the cluster.
>>>  cluster delete  Delete the cluster(s).
>>>  cluster event list  List events.
>>>  cluster event show  Describe the event.
>>>  cluster expand  Scale out a cluster by the specified number of nodes.
>>>  cluster list   List the user's clusters.
>>>  cluster members add  Add specified nodes to cluster.
>>>  cluster members del  Delete specified nodes from cluster.
>>>  cluster members 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-22 Thread Ricardo Rocha
Hi.

One simplification would be:
openstack coe create/list/show/config/update
openstack coe template create/list/show/update
openstack coe ca show/sign

This covers all the required commands and is a bit less verbose. The
cluster word is too generic and probably adds no useful info.

Whatever it is, kerberos support for the magnum client is very much
needed and welcome! :)

Cheers,
  Ricardo

On Tue, Mar 21, 2017 at 2:54 PM, Spyros Trigazis  wrote:
> IMO, coe is a little confusing. It is a term used by people related somehow
> to the magnum community. When I describe to users how to use magnum,
> I spent a few moments explaining what we call coe.
>
> I prefer one of the following:
> * openstack magnum cluster create|delete|...
> * openstack mcluster create|delete|...
> * both the above
>
> It is very intuitive for users because, they will be using an openstack
> cloud
> and they will be wanting to use the magnum service. So, it only make sense
> to type openstack magnum cluster or mcluster which is shorter.
>
>
> On 21 March 2017 at 02:24, Qiming Teng  wrote:
>>
>> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
>> > On 03/20/2017 03:08 PM, Adrian Otto wrote:
>> > >Team,
>> > >
>> > >Stephen Watson has been working on an magnum feature to add magnum
>> > > commands to the openstack client by implementing a plugin:
>> > >
>> >
>> > > >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
>> > >
>> > >In review of this work, a question has resurfaced, as to what the
>> > > client command name should be for magnum related commands. Naturally, 
>> > > we’d
>> > > like to have the name “cluster” but that word is already in use by 
>> > > Senlin.
>> >
>> > Unfortunately, the Senlin API uses a whole bunch of generic terms as
>> > top-level REST resources, including "cluster", "event", "action",
>> > "profile", "policy", and "node". :( I've warned before that use of
>> > these generic terms in OpenStack APIs without a central group
>> > responsible for curating the API would lead to problems like this.
>> > This is why, IMHO, we need the API working group to be ultimately
>> > responsible for preventing this type of thing from happening.
>> > Otherwise, there ends up being a whole bunch of duplication and same
>> > terms being used for entirely different things.
>> >
>>
>> Well, I believe the name and namespaces used by Senlin is very clean.
>> Please see the following outputs. All commands are contained in the
>> cluster namespace to avoid any conflicts with any other projects.
>>
>> On the other hand, is there any document stating that Magnum is about
>> providing clustering service? Why Magnum cares so much about the top
>> level noun if it is not its business?
>
>
> From magnum's wiki page [1]:
> "Magnum uses Heat to orchestrate an OS image which contains Docker
> and Kubernetes and runs that image in either virtual machines or bare
> metal in a cluster configuration."
>
> Many services may offer clusters indirectly. Clusters is NOT magnum's focus,
> but we can't refer to a collection of virtual machines or physical servers
> with
> another name. Bay proven to be confusing to users. I don't think that magnum
> should reserve the cluster noun, even if it was available.
>
> [1] https://wiki.openstack.org/wiki/Magnum
>
>>
>>
>>
>> $ openstack --help | grep cluster
>>
>>   --os-clustering-api-version 
>>
>>   cluster action list  List actions.
>>   cluster action show  Show detailed info about the specified action.
>>   cluster build info  Retrieve build information.
>>   cluster check  Check the cluster(s).
>>   cluster collect  Collect attributes across a cluster.
>>   cluster create  Create the cluster.
>>   cluster delete  Delete the cluster(s).
>>   cluster event list  List events.
>>   cluster event show  Describe the event.
>>   cluster expand  Scale out a cluster by the specified number of nodes.
>>   cluster list   List the user's clusters.
>>   cluster members add  Add specified nodes to cluster.
>>   cluster members del  Delete specified nodes from cluster.
>>   cluster members list  List nodes from cluster.
>>   cluster members replace  Replace the nodes in a cluster with
>>   specified nodes.
>>   cluster node check  Check the node(s).
>>   cluster node create  Create the node.
>>   cluster node delete  Delete the node(s).
>>   cluster node list  Show list of nodes.
>>   cluster node recover  Recover the node(s).
>>   cluster node show  Show detailed info about the specified node.
>>   cluster node update  Update the node.
>>   cluster policy attach  Attach policy to cluster.
>>   cluster policy binding list  List policies from cluster.
>>   cluster policy binding show  Show a specific policy that is bound to
>>   the specified cluster.
>>   cluster policy binding update  Update a policy's properties on a
>>   cluster.
>>   cluster policy create  Create a policy.
>>   cluster policy 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Qiming Teng
On Tue, Mar 21, 2017 at 10:50:13AM -0400, Jay Pipes wrote:
> On 03/20/2017 09:24 PM, Qiming Teng wrote:
> >On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> >>On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >>>Team,
> >>>
> >>>Stephen Watson has been working on an magnum feature to add magnum 
> >>>commands to the openstack client by implementing a plugin:
> >>>
> >>>https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >>>
> >>>In review of this work, a question has resurfaced, as to what the client 
> >>>command name should be for magnum related commands. Naturally, we’d like 
> >>>to have the name “cluster” but that word is already in use by Senlin.
> >>
> >>Unfortunately, the Senlin API uses a whole bunch of generic terms as
> >>top-level REST resources, including "cluster", "event", "action",
> >>"profile", "policy", and "node". :( I've warned before that use of
> >>these generic terms in OpenStack APIs without a central group
> >>responsible for curating the API would lead to problems like this.
> >>This is why, IMHO, we need the API working group to be ultimately
> >>responsible for preventing this type of thing from happening.
> >>Otherwise, there ends up being a whole bunch of duplication and same
> >>terms being used for entirely different things.
> >>
> >
> >Well, I believe the name and namespaces used by Senlin is very clean.
> 
> Note that above I referred to the Senlin *API*:
> 
> https://developer.openstack.org/api-ref/clustering/
> 
> The use of generic terms like "cluster", "node", "policy",
> "profile", "action", and "event" as *top-level resources in the REST
> API* are what I was warning about.
> 
> >Please see the following outputs. All commands are contained in the
> >cluster namespace to avoid any conflicts with any other projects.
> 
> Right, but I was talking about the REST API.
> 
> >On the other hand, is there any document stating that Magnum is about
> >providing clustering service?
> 
> What exactly is a clustering service?
> 
> I mean, Galera has a clustering service. Pacemaker has a clustering
> service. k8s has a clustering service. etcd has a clustering
> service. Zookeeper has a clustering service.
> 
> Senlin is an API that allows a user to group *virtual machines*
> together and expand or shrink that group of VMs. It's basically the
> old Heat autoscaling API done properly. There's a *lot* to like
> about Senlin's API and implementation.

Okay, I see where the confusion comes from. Senlin is designed to be a
*generic clustering service* that can create and manage arbitrary
resource types. It can create VM groups and manage the scaling of such
groups properly. It can provide VM HA based on resource redundancy.
It models load-balancing support as a policy that can be attached to
and detached from a VM cluster.

Senlin manages "nodes" created from a "profile". A VM instance is only
one of the profile types supported. Senlin also supports clusters of
Heat stacks and clusters of docker containers today. There are also
efforts toward managing bare-metal servers.
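
To make that concrete, a minimal workflow looks roughly like this (the spec
file and flags are illustrative):

  openstack cluster profile create --spec-file server.yaml web-profile
  openstack cluster create --profile web-profile --desired-capacity 2 web-cluster
  openstack cluster expand --count 2 web-cluster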

The team also uses "resource pools" and "clusters" interchangeably,
because that IS what the service is about. Calling Senlin a resource
pool service may be more confusing, right?

- Qiming

> However, it would have been more appropriate (and forward-looking)
> to call Senlin's namespace "instance group" or "server group" than
> the generic term "cluster".
> 
> >  Why Magnum cares so much about the top
> >level noun if it is not its business?
> 
> Because Magnum uses the term "cluster" as a top-level resource in
> its own REST API:
> 
> http://git.openstack.org/cgit/openstack/magnum/tree/magnum/api/controllers/v1/cluster.py
> 
> The generic term "cluster" that Magnum uses should really be called
> "coe group" or "container engine group" or "container service group"
> or something like that, to better indicate what exactly is being
> operated on.
> 
> Best,
> -jay
> 
> >$ openstack --help | grep cluster
> >
> >  --os-clustering-api-version 
> >
> >  cluster action list  List actions.
> >  cluster action show  Show detailed info about the specified action.
> >  cluster build info  Retrieve build information.
> >  cluster check  Check the cluster(s).
> >  cluster collect  Collect attributes across a cluster.
> >  cluster create  Create the cluster.
> >  cluster delete  Delete the cluster(s).
> >  cluster event list  List events.
> >  cluster event show  Describe the event.
> >  cluster expand  Scale out a cluster by the specified number of nodes.
> >  cluster list   List the user's clusters.
> >  cluster members add  Add specified nodes to cluster.
> >  cluster members del  Delete specified nodes from cluster.
> >  cluster members list  List nodes from cluster.
> >  cluster members replace  Replace the nodes in a cluster with
> >  specified nodes.
> >  cluster node check  Check the node(s).
> >  cluster node create  Create the node.
> >  cluster node delete  Delete the 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Kumari, Madhuri
It seems COE is a valid term now. I am in favor of having “openstack
coe cluster” or “openstack container cluster”.
Using the command “infra” is too generic and doesn’t relate to what Magnum is 
doing exactly.

Regards,
Madhuri

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: Tuesday, March 21, 2017 7:25 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum commands 
in osc?

IMO, coe is a little confusing. It is a term used by people related somehow
to the magnum community. When I describe to users how to use magnum,
I spent a few moments explaining what we call coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both the above

It is very intuitive for users because, they will be using an openstack cloud
and they will be wanting to use the magnum service. So, it only make sense
to type openstack magnum cluster or mcluster which is shorter.


On 21 March 2017 at 02:24, Qiming Teng 
<teng...@linux.vnet.ibm.com> wrote:
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >Team,
> >
> >Stephen Watson has been working on an magnum feature to add magnum commands 
> >to the openstack client by implementing a plugin:
> >
> >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >
> >In review of this work, a question has resurfaced, as to what the client 
> >command name should be for magnum related commands. Naturally, we’d like to 
> >have the name “cluster” but that word is already in use by Senlin.
>
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this.
> This is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening.
> Otherwise, there ends up being a whole bunch of duplication and same
> terms being used for entirely different things.
>

Well, I believe the name and namespaces used by Senlin is very clean.
Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with any other projects.

On the other hand, is there any document stating that Magnum is about
providing clustering service? Why Magnum cares so much about the top
level noun if it is not its business?

From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a cluster configuration."

Many services may offer clusters indirectly. Clusters is NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers with
another name. Bay proven to be confusing to users. I don't think that magnum
should reserve the cluster noun, even if it was available.

[1] https://wiki.openstack.org/wiki/Magnum



$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delet

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Spyros Trigazis
IMO, coe is a little confusing. It is a term used by people related somehow
to the magnum community. When I describe to users how to use magnum,
I spend a few moments explaining what we call a coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both the above

It is very intuitive for users because they will be using an openstack
cloud and they will want to use the magnum service. So it only makes sense
to type openstack magnum cluster, or mcluster, which is shorter.


On 21 March 2017 at 02:24, Qiming Teng  wrote:

> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> > On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > >Team,
> > >
> > >Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> > >
> > >https://review.openstack.org/#/q/status:open+project:
> openstack/python-magnumclient+osc
> > >
> > >In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally, we’d
> like to have the name “cluster” but that word is already in use by Senlin.
> >
> > Unfortunately, the Senlin API uses a whole bunch of generic terms as
> > top-level REST resources, including "cluster", "event", "action",
> > "profile", "policy", and "node". :( I've warned before that use of
> > these generic terms in OpenStack APIs without a central group
> > responsible for curating the API would lead to problems like this.
> > This is why, IMHO, we need the API working group to be ultimately
> > responsible for preventing this type of thing from happening.
> > Otherwise, there ends up being a whole bunch of duplication and same
> > terms being used for entirely different things.
> >
>
> Well, I believe the name and namespaces used by Senlin is very clean.
> Please see the following outputs. All commands are contained in the
> cluster namespace to avoid any conflicts with any other projects.
>
> On the other hand, is there any document stating that Magnum is about
> providing clustering service? Why Magnum cares so much about the top
> level noun if it is not its business?
>

From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a *cluster* configuration."

Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers
by another name. Bay has proven to be confusing to users. I don't think that
magnum should reserve the cluster noun, even if it were available.

[1] https://wiki.openstack.org/wiki/Magnum


>
>
> $ openstack --help | grep cluster
>
>   --os-clustering-api-version 
>
>   cluster action list  List actions.
>   cluster action show  Show detailed info about the specified action.
>   cluster build info  Retrieve build information.
>   cluster check  Check the cluster(s).
>   cluster collect  Collect attributes across a cluster.
>   cluster create  Create the cluster.
>   cluster delete  Delete the cluster(s).
>   cluster event list  List events.
>   cluster event show  Describe the event.
>   cluster expand  Scale out a cluster by the specified number of nodes.
>   cluster list   List the user's clusters.
>   cluster members add  Add specified nodes to cluster.
>   cluster members del  Delete specified nodes from cluster.
>   cluster members list  List nodes from cluster.
>   cluster members replace  Replace the nodes in a cluster with
>   specified nodes.
>   cluster node check  Check the node(s).
>   cluster node create  Create the node.
>   cluster node delete  Delete the node(s).
>   cluster node list  Show list of nodes.
>   cluster node recover  Recover the node(s).
>   cluster node show  Show detailed info about the specified node.
>   cluster node update  Update the node.
>   cluster policy attach  Attach policy to cluster.
>   cluster policy binding list  List policies from cluster.
>   cluster policy binding show  Show a specific policy that is bound to
>   the specified cluster.
>   cluster policy binding update  Update a policy's properties on a
>   cluster.
>   cluster policy create  Create a policy.
>   cluster policy delete  Delete policy(s).
>   cluster policy detach  Detach policy from cluster.
>   cluster policy list  List policies that meet the criteria.
>   cluster policy show  Show the policy details.
>   cluster policy type list  List the available policy types.
>   cluster policy type show  Get the details about a policy type.
>   cluster policy update  Update a policy.
>   cluster policy validate  Validate a policy.
>   cluster profile create  Create a profile.
>   cluster profile delete  Delete profile(s).
>   cluster profile list  List profiles that meet the criteria.
>   cluster profile show  Show profile details.
>   

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Jay Pipes

On 03/20/2017 09:24 PM, Qiming Teng wrote:

On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:

Team,

Stephen Watson has been working on an magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.


Unfortunately, the Senlin API uses a whole bunch of generic terms as
top-level REST resources, including "cluster", "event", "action",
"profile", "policy", and "node". :( I've warned before that use of
these generic terms in OpenStack APIs without a central group
responsible for curating the API would lead to problems like this.
This is why, IMHO, we need the API working group to be ultimately
responsible for preventing this type of thing from happening.
Otherwise, there ends up being a whole bunch of duplication and same
terms being used for entirely different things.



Well, I believe the name and namespaces used by Senlin is very clean.


Note that above I referred to the Senlin *API*:

https://developer.openstack.org/api-ref/clustering/

The use of generic terms like "cluster", "node", "policy", "profile", 
"action", and "event" as *top-level resources in the REST API* are what 
I was warning about.



Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with any other projects.


Right, but I was talking about the REST API.


On the other hand, is there any document stating that Magnum is about
providing clustering service?


What exactly is a clustering service?

I mean, Galera has a clustering service. Pacemaker has a clustering 
service. k8s has a clustering service. etcd has a clustering service. 
Zookeeper has a clustering service.


Senlin is an API that allows a user to group *virtual machines* together 
and expand or shrink that group of VMs. It's basically the old Heat 
autoscaling API done properly. There's a *lot* to like about Senlin's 
API and implementation.


However, it would have been more appropriate (and forward-looking) to 
call Senlin's namespace "instance group" or "server group" than the 
generic term "cluster".


>  Why Magnum cares so much about the top

level noun if it is not its business?


Because Magnum uses the term "cluster" as a top-level resource in its 
own REST API:


http://git.openstack.org/cgit/openstack/magnum/tree/magnum/api/controllers/v1/cluster.py

The generic term "cluster" that Magnum uses should really be called "coe 
group" or "container engine group" or "container service group" or 
something like that, to better indicate what exactly is being operated on.


Best,
-jay


$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delete policy(s).
  cluster policy detach  Detach policy from cluster.
  cluster policy list  List policies that meet the criteria.
  cluster policy show  Show the policy details.
  cluster policy type list  List the available policy types.
  cluster policy type show  Get the details about a policy type.
  cluster policy update  Update a policy.
  cluster policy validate  Validate a policy.
  cluster profile create  Create a profile.
  cluster profile delete  Delete profile(s).
  cluster profile list  List profiles that meet the criteria.
  cluster 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Anne Gentle
On Mon, Mar 20, 2017 at 4:38 PM, Dean Troyer  wrote:

> On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto 
> wrote:
> > So, to be clear, this would result in the following command for what we
> currently use “magnum cluster create” for:
> >
> > openstack coe cluster create …
> >
> > Is this right?
>
> Yes.
>
>
This looks good to me as an OSC user.

One other question: I honestly can't remember whether the projects.yaml name
needs to match the service catalog name. Might be a good time to sync
everything if so. Right now it's "Container Infrastructure Management
service"; it could become "Container Orchestration Engine Management service".

Naming, it's hard.
Anne


> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Monty Taylor
On 03/20/2017 08:16 PM, Dean Troyer wrote:
> On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor  wrote:
>>> [Hongbin Lu]
>>> I think the style would be more consistent if all the resources are 
>>> qualified or un-qualified, not a mix of both.
> 
>> So - swift got here first, it wins, it gets container. The fine folks in
>> barbican, rather than calling a thing a container and then needing to
>> call it a secret-container - maybe could call their thing a vault or a
>> locker or a safe or a lockbox or an oubliette. (for instance)
> 
> Right, there _were_ only 5 projects when we started this and we
> re-used most of the original project-specific names.  Swift is a
> particularly fun one because both 'container' and 'object' are
> extremely useful in that context, but both are also extremely generic,
> and 'object container', well, what is that?
> 
>> I do not have any suggestions for things that actually return a resource
>> that is a single "linux container" - since swift called their thing a
>> container before docker was written and popularized the word to mean
>> something different. We might just get to be fun and different - sort of
>> like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
>> user, you "kill" text into the kill ring and then you "yank" from the
>> ring into the current document.)
> 
> Monty, grab your Tardis and follow me around the Austin summit and
> listen to the opinions I get for doing things like this :)

Which Austin summit - haven't we been at two together now? ;)

>> OTOH, I think Dean has talked about more verbose terms and then aliases
>> for backwards compat. So maybe a swift container is always an
>> "object_container" - but because of history it gets to also be
>> unqualified "container" - but then we could have "object container" and
>> "secret container" and "linux container" ... similarly we could have
>> "server flavor" and "volume flavor" ... etc.
> 
> Yes, we do have plans to go back and qualify some of these resource
> names to be consistent, but the current names will probably never
> change, we'll just have the qualified names for those who prefer to
> use them.
> 
> Flavor is my favorite example of this as we add network flavor, and
> others.  It also illustrates the 'it isn't a namespace' point, as it will
> become 'server flavor' rather than 'compute flavor'.

Yes - that's an excellent example.

I think one of the most important things to realize is that our project
organization is much less interesting to our API consumers than it is to
developers and operators, _especially_ when some things move their
project home over time (is it compute floating-ip? is it network
floating-ip?). It also means a single project can have more than one
thing that is similar in different contexts (we have both a ComputeUsage
and a ServerUsage - ServerUsage being the usage for a specific server,
while ComputeUsage is the aggregate compute usage for a project).

Yay naming!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Qiming Teng
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >Team,
> >
> >Stephen Watson has been working on a magnum feature to add magnum commands 
> >to the openstack client by implementing a plugin:
> >
> >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >
> >In review of this work, a question has resurfaced, as to what the client 
> >command name should be for magnum related commands. Naturally, we’d like to 
> >have the name “cluster” but that word is already in use by Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this.
> This is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening.
> Otherwise, there ends up being a whole bunch of duplication and same
> terms being used for entirely different things.
> 

Well, I believe the name and namespaces used by Senlin are very clean.
Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with any other projects.

On the other hand, is there any document stating that Magnum is about
providing a clustering service? Why does Magnum care so much about the
top-level noun if it is not its business?


$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delete policy(s).
  cluster policy detach  Detach policy from cluster.
  cluster policy list  List policies that meet the criteria.
  cluster policy show  Show the policy details.
  cluster policy type list  List the available policy types.
  cluster policy type show  Get the details about a policy type.
  cluster policy update  Update a policy.
  cluster policy validate  Validate a policy.
  cluster profile create  Create a profile.
  cluster profile delete  Delete profile(s).
  cluster profile list  List profiles that meet the criteria.
  cluster profile show  Show profile details.
  cluster profile type list  List the available profile types.
  cluster profile type show  Show the details about a profile type.
  cluster profile update  Update a profile.
  cluster profile validate  Validate a profile.
  cluster receiver create  Create a receiver.
  cluster receiver delete  Delete receiver(s).
  cluster receiver list  List receivers that meet the criteria.
  cluster receiver show  Show the receiver details.
  cluster recover  Recover the cluster(s).
  cluster resize  Resize a cluster.
  cluster run    Run scripts on cluster.
  cluster show   Show details of the cluster.
  cluster shrink  Scale in a cluster by the specified number of nodes.
  cluster template list  List Cluster Templates.
  cluster update  Update the cluster.

- Qiming

> >Stephen opened a discussion with Dean Troyer about this, and found
> that “infra” might be a suitable name and began using that, and
> multiple team members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an
> actual "thing" that Magnum provides.
> 
> > The name “magnum” was excluded from consideration because OSC aims
> to be project name agnostic. We know that no matter what word we
> pick, it’s not going to be ideal. I’ve added an agenda item to our
> upcoming team meeting to judge community 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor  wrote:
>> [Hongbin Lu]
>> I think the style would be more consistent if all the resources are 
>> qualified or un-qualified, not a mix of both.

> So - swift got here first, it wins, it gets container. The fine folks in
> barbican, rather than calling a thing a container and then needing to
> call it a secret-container - maybe could call their thing a vault or a
> locker or a safe or a lockbox or an oubliette. (for instance)

Right, there _were_ only 5 projects when we started this and we
re-used most of the original project-specific names.  Swift is a
particularly fun one because both 'container' and 'object' are
extremely useful in that context, but both are also extremely generic,
and 'object container', well, what is that?

> I do not have any suggestions for things that actually return a resource
> that is a single "linux container" - since swift called their thing a
> container before docker was written and popularized the word to mean
> something different. We might just get to be fun and different - sort of
> like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
> user, you "kill" text into the kill ring and then you "yank" from the
> ring into the current document.)

Monty, grab your Tardis and follow me around the Austin summit and
listen to the opinions I get for doing things like this :)

> OTOH, I think Dean has talked about more verbose terms and then aliases
> for backwards compat. So maybe a swift container is always an
> "object_container" - but because of history it gets to also be
> unqualified "container" - but then we could have "object container" and
> "secret container" and "linux container" ... similarly we could have
> "server flavor" and "volume flavor" ... etc.

Yes, we do have plans to go back and qualify some of these resource
names to be consistent, but the current names will probably never
change, we'll just have the qualified names for those who prefer to
use them.

Flavor is my favorite example of this as we add network flavor, and
others.  It also illustrates the 'it isn't a namespace' point, as it will
become 'server flavor' rather than 'compute flavor'.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Monty Taylor
On 03/20/2017 05:39 PM, Hongbin Lu wrote:
> 
> 
>> -Original Message-
>> From: Dean Troyer [mailto:dtro...@gmail.com]
>> Sent: March-20-17 5:19 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
>> commands in osc?
>>
>> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto <adrian.o...@rackspace.com>
>> wrote:
>>> the  argument is actually the service name, such as “ec2”.
>> This is the same way the openstack cli works. Perhaps there is another
>> tool that you are referring to. Have I misunderstood something?
>>
>> I am going to jump in here and clarify one thing.  OSC does not do
>> project namespacing, or any other sort of namespacing for its resource
>> names.  It uses qualified resource names (fully-qualified even?).  In
>> some cases this results in something that looks a lot like namespacing,
>> but it isn't. The Volume API commands are one example of this, nearly
>> every resource there includes the word 'volume' but not because that is
>> the API name, it is because that is the correct name for those
>> resources ('volume backup', etc).
> 
> [Hongbin Lu] I might provide a minority point of view here. What confused me
> is the inconsistent style of the resource names. For example, there is a
> "container" resource for a swift container, and there is a "secret container"
> resource for a barbican container. I just found it odd to have both an
> un-qualified resource name (i.e. container) and a qualified resource name
> (i.e. secret container) in the same CLI. It appears to me that some resources
> are namespaced and others are not, and this kind of style provides a
> suboptimal user experience from my point of view.
> 
> I think the style would be more consistent if all the resources are qualified 
> or un-qualified, not a mix of both.

Yes - if we had been more forward thinking a while back, I think we
could do that. However, some things are already done and changing them
would be an incredible amount of churn.

In my happy world, we would all consider the resource names that exist
across the openstack projects before we make new ones.

So - swift got here first, it wins, it gets container. The fine folks in
barbican, rather than calling a thing a container and then needing to
call it a secret-container - maybe could call their thing a vault or a
locker or a safe or a lockbox or an oubliette. (for instance)

I do not have any suggestions for things that actually return a resource
that is a single "linux container" - since swift called their thing a
container before docker was written and popularized the word to mean
something different. We might just get to be fun and different - sort of
like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
user, you "kill" text into the kill ring and then you "yank" from the
ring into the current document.)

OTOH, I think Dean has talked about more verbose terms and then aliases
for backwards compat. So maybe a swift container is always an
"object_container" - but because of history it gets to also be
unqualified "container" - but then we could have "object container" and
"secret container" and "linux container" ... similarly we could have
"server flavor" and "volume flavor" ... etc.

(fwiw, shade just picks winners - so "create_container" gets you a swift
container. No clue what we'll do when we add barbican or zun yet ...
maybe the same thing?)
>>
>>> We could do the same thing and use the text “container_infra”, but we
>> felt that might be burdensome for interactive use and wanted to find
>> something shorter that would still make sense.
>>
>> Naming resources is hard to get right.  Here's my thought process:
>>
>> For OSC, start with how to describe the specific 'thing' being
>> manipulated.  In this case, it is some kind of cluster.  In the list
>> you posted in the first email, 'coe cluster' seems to be the best
>> option.  I think 'coe' is acceptable as an abbreviation (we usually do
>> not use them) because that is a specific term used in the field and
>> satisfies the 'what kind of cluster?' question.  No underscores please,
>> and in fact no dash here; resource names have spaces in them.
>>
>> dt
>>
>> --
>>
>> Dean Troyer
>> dtro...@gmail.com
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:uns

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu


> -Original Message-
> From: Dean Troyer [mailto:dtro...@gmail.com]
> Sent: March-20-17 5:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
> > the  argument is actually the service name, such as “ec2”.
> This is the same way the openstack cli works. Perhaps there is another
> tool that you are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like namespacing,
> but it isn't. The Volume API commands are one example of this, nearly
> every resource there includes the word 'volume' but not because that is
> the API name, it is because that is the correct name for those
> resources ('volume backup', etc).

[Hongbin Lu] I might provide a minority point of view here. What confused me is
the inconsistent style of the resource names. For example, there is a
"container" resource for a swift container, and there is a "secret container"
resource for a barbican container. I just found it odd to have both an
un-qualified resource name (i.e. container) and a qualified resource name
(i.e. secret container) in the same CLI. It appears to me that some resources
are namespaced and others are not, and this kind of style provides a suboptimal
user experience from my point of view.

I think the style would be more consistent if all the resources are qualified 
or un-qualified, not a mix of both.
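
To illustrate the mix I mean, here are example invocations (assuming the
relevant object-store and barbican commands are available in the client):

  $ openstack container list          # un-qualified: swift containers
  $ openstack secret container list   # qualified: barbican containers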

> 
> > We could do the same thing and use the text “container_infra”, but we
> felt that might be burdensome for interactive use and wanted to find
> something shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my thought process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores please,
> and in fact no dash here; resource names have spaces in them.
> 
> dt
> 
> --
> 
> Dean Troyer
> dtro...@gmail.com
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 22:19:14 +:
> I was unsure, so I found him on IRC to clarify, and he pointed me to the 
> openstack/service-types-authority repository, where I submitted patch 445694 
> for review. We have three distinct identifiers in play:
> 
> 1) Our existing service catalog entry name: container-infra
> 2) Our openstack client noun: TBD, decision expected from our team tomorrow. 
> My suggestion: "coe cluster”.
> 3) Our (proposed) service type: coe-cluster
> 
> Each identifier has respective guidelines and limits, so they differ.
> 
> Adrian

Oh neat, I didn't even know that repository existed. TIL.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Clint,

On Mar 20, 2017, at 3:02 PM, Clint Byrum  wrote:

Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +:
Jay,

On Mar 20, 2017, at 12:35 PM, Jay Pipes  wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.

Unfortunately, the Senlin API uses a whole bunch of generic terms as top-level 
REST resources, including "cluster", "event", "action", "profile", "policy", 
and "node". :( I've warned before that use of these generic terms in OpenStack 
APIs without a central group responsible for curating the API would lead to 
problems like this. This is why, IMHO, we need the API working group to be 
ultimately responsible for preventing this type of thing from happening. 
Otherwise, there ends up being a whole bunch of duplication and same terms 
being used for entirely different things.

Stephen opened a discussion with Dean Troyer about this, and found that “infra” 
might be a suitable name and began using that, and multiple team members are 
not satisfied with it.

Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.

The name “magnum” was excluded from consideration because OSC aims to be 
project name agnostic. We know that no matter what word we pick, it’s not going 
to be ideal. I’ve added an agenda item to our upcoming team meeting to judge 
community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

* c_cluster (possible abbreviation alias for container_infra_cluster)
* coe_cluster
* mcluster
* infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

What is Magnum's service-types-authority service_type?

I propose "coe-cluster” for that, but that should be discussed further, as it’s 
impossible for magnum to conform with all the requirements for service types 
because they fundamentally conflict with each other:

https://review.openstack.org/447694

In the past we referred to this type as a “bay” but found it burdensome for 
users and operators to use that term when literally bay == cluster. We just 
needed to call it what it is because there’s a prevailing name for that 
concept, and everyone expects that’s what it’s called.

I think Jay was asking for Magnum's name in the catalog:

Which is 'container-infra' according to this:

https://github.com/openstack/python-magnumclient/blob/master/magnumclient/v1/client.py#L34

I was unsure, so I found him on IRC to clarify, and he pointed me to the 
openstack/service-types-authority repository, where I submitted patch 445694 
for review. We have three distinct identifiers in play:

1) Our existing service catalog entry name: container-infra
2) Our openstack client noun: TBD, decision expected from our team tomorrow. My 
suggestion: "coe cluster”.
3) Our (proposed) service type: coe-cluster

Each identifier has respective guidelines and limits, so they differ.
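
For example, illustrative only and assuming the proposed noun and service
type are adopted, the three identifiers would surface roughly like this:

  $ openstack catalog show container-infra   # 1) service catalog entry
  $ openstack coe cluster list               # 2) client noun in osc
  # 3) "coe-cluster" is what would be proposed to service-types-authority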

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +:
> Jay,
> 
> On Mar 20, 2017, at 12:35 PM, Jay Pipes 
> > wrote:
> 
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> Team,
> 
> Stephen Watson has been working on a magnum feature to add magnum commands 
> to the openstack client by implementing a plugin:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the client 
> command name should be for magnum related commands. Naturally, we’d like to 
> have the name “cluster” but that word is already in use by Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as 
> top-level REST resources, including "cluster", "event", "action", "profile", 
> "policy", and "node". :( I've warned before that use of these generic terms 
> in OpenStack APIs without a central group responsible for curating the API 
> would lead to problems like this. This is why, IMHO, we need the API working 
> group to be ultimately responsible for preventing this type of thing from 
> happening. Otherwise, there ends up being a whole bunch of duplication and 
> same terms being used for entirely different things.
> 
> >Stephen opened a discussion with Dean Troyer about this, and found that 
> >“infra” might be a suitable name and began using that, and multiple team 
> >members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an actual 
> "thing" that Magnum provides.
> 
> > The name “magnum” was excluded from consideration because OSC aims to be 
> > project name agnostic. We know that no matter what word we pick, it’s not 
> > going to be ideal. I’ve added an agenda item to our upcoming team meeting to 
> > judge community consensus about which alternative we should select:
> 
> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC
> 
> Current choices on the table are:
> 
>  * c_cluster (possible abbreviation alias for container_infra_cluster)
>  * coe_cluster
>  * mcluster
>  * infra
> 
> For example, our selected name would appear in “openstack …” commands. Such 
> as:
> 
> $ openstack c_cluster create …
> 
> If you have input to share, I encourage you to reply to this thread, or come 
> to the team meeting so we can consider your input before the team makes a 
> selection.
> 
> What is Magnum's service-types-authority service_type?
> 
> I propose "coe-cluster” for that, but that should be discussed further, as 
> it’s impossible for magnum to conform with all the requirements for service 
> types because they fundamentally conflict with each other:
> 
> https://review.openstack.org/447694
> 
> In the past we referred to this type as a “bay” but found it burdensome for 
> users and operators to use that term when literally bay == cluster. We just 
> needed to call it what it is because there’s a prevailing name for that 
> concept, and everyone expects that’s what it’s called.

I think Jay was asking for Magnum's name in the catalog:

Which is 'container-infra' according to this:

https://github.com/openstack/python-magnumclient/blob/master/magnumclient/v1/client.py#L34

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto  wrote:
> So, to be clear, this would result in the following command for what we 
> currently use “magnum cluster create” for:
>
> openstack coe cluster create …
>
> Is this right?

Yes.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

