Kai,
+1 for adding it to the baymodel, but I don’t see many use cases where people
need to change it. And if they really need to change it, they can always modify
the Heat template.
-1 for opening it just to admins. I think everyone who creates a model should
be able to specify it the same way as
+1 for non-Barbican support first; unfortunately Barbican is not very well
adopted in existing installations.
Madhuri, also please keep in mind we should come up with a solution which
works with Swarm and Mesos as well going forward.
—
Egor
From: Madhuri Rai
Vikas,
Could you clarify what you mean by ’status’? I don’t see this command in
kubectl, so I assume it is get or describe?
Also for Docker, is it info, inspect, or stats? We can get app/container details
through the Marathon API in Mesos, but it very much depends on what information
we are looking for.
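For context, a sketch of the commands I had in mind (assuming a running cluster/daemon; names like `my-pod`, `my-container`, and the Marathon host are made up):

```shell
# Kubernetes: list objects, then show full details of one
kubectl get pods
kubectl describe pod my-pod        # hypothetical pod name

# Docker: daemon-wide info vs. per-container details vs. live resource usage
docker info
docker inspect my-container        # hypothetical container name
docker stats my-container

# Mesos/Marathon: app details over the HTTP API
curl http://marathon.example.com:8080/v2/apps/my-app
```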
Eli,
First of all I would like to say thank you for your effort (I have never seen so
many patch sets ;)), but I don’t think we should remove the “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work for
some reason.
I think
Eli,
You are correct that Swarm supports only one active/passive deployment model, but
according to the Docker documentation
https://docs.docker.com/swarm/multi-manager-setup/
even a replica can handle user requests: “You can use the docker command on any
Docker Swarm primary manager or any replica."
it
Steve,
Actually, Kub is moving to a fully containerized model where you only need the
kubelet running on the host
(https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
and all other services will come in containers (e.g. the UI:
http://kubernetes.io/v1.0/docs/user-guide/ui.html). So we
Michal/Steve,
could you elaborate on choosing Marathon vs Aurora vs a custom scheduler
(to implement very precise control over placement/failures/etc.)?
—
Egor
On 11/2/15, 22:44, "Michal Rostecki" wrote:
>Hi,
>
>+1 to what Steven said about Kubernetes.
>
>I'd like
Gal, thanks a lot. I have created the poll
http://doodle.com/poll/udpdw77evdpnsaq6 where everyone can vote for a time slot.
—
Egor
From: Gal Sagie
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Adrian,
I agree with Steve; otherwise it’s hard to find the balance of what should go
into the quick start guide (e.g. many operators worry about CPU or I/O instead
of memory). Also I believe auto-scaling deserves its own detailed document.
—
Egor
From: Adrian Otto
Ryan
I haven’t seen any proposals/implementations from Mesos/Swarm (but I am not
following the Mesos and Swarm communities very closely these days).
But Kubernetes 1.1 has pod autoscaling
(https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md),
which should cover
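As a sketch of how the 1.1 autoscaling surfaces to users (hypothetical replication controller name, assuming a cluster with CPU metrics available):

```shell
# Create a horizontal pod autoscaler for a replication controller,
# scaling between 1 and 10 replicas to hold ~80% CPU utilization
kubectl autoscale rc frontend --min=1 --max=10 --cpu-percent=80

# Inspect the autoscaler's current state
kubectl get hpa
```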
Adrian, I agree with your points. But I think we should discuss it during the
next team meeting and address/answer all concerns which team members may have.
Grzegorz, can you join?
—
Egor
From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: OpenStack Development
+1 for stopping use of the public discovery endpoint; most private cloud VMs
don’t have access to the internet, and the operator must run an etcd instance
somewhere just for discovery.
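A sketch of the private alternative, using an existing internal etcd cluster as the discovery backend (etcd v2 custom discovery; the host `etcd.internal` is made up):

```shell
# Create a discovery token on a private etcd and set the expected cluster size
TOKEN=$(uuidgen)
curl -X PUT "http://etcd.internal:2379/v2/keys/discovery/${TOKEN}/_config/size" \
     -d value=3

# New cluster members then bootstrap against the private URL instead of
# https://discovery.etcd.io
etcd --discovery "http://etcd.internal:2379/v2/keys/discovery/${TOKEN}"
```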
—
Egor
From: Andrew Melton
Reply-To: "OpenStack
Also I believe Docker Compose is just a command line tool which doesn’t have any
API or scheduling features.
But during the last DockerCon hackathon, PayPal folks implemented a Docker
Compose executor for Mesos (https://github.com/mohitsoni/compose-executor),
which can give you a pod-like experience.
—
Kris,
We are facing similar challenges/questions, and here are some thoughts. We
cannot ignore scalability limits: Kub ~ 100 nodes (there are plans to support
1K next year), Swarm ~ ??? (I have never heard of even 100 nodes; definitely
not ready for production yet (happy to be wrong ;))), Mesos
r Orchestration Engine
Deployment as a Service.
On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the correspo
+1 to Hongbin’s concerns about exposing passwords. I think we should start
with a dedicated kub user in the magnum config and move to Keystone domains
later. I am just wondering how the Kuryr team is planning to solve a similar
issue (I believe the libnetwork driver requires Neutron’s credentials). Can someone
Vilobh/Tim, could you elaborate on your use-cases around Magnum quotas?
My concern is that users will easily get lost in quotas ;), e.g. we already have
nova/cinder/neutron and Kub/Mesos (framework) quotas.
There are two use-cases:
- a tenant has its own list of bays/clusters (nova/cinder/neutron
Gal,
I think you need to set up your Docker environment to allow running the CLI
without sudo (https://docs.docker.com/engine/installation/ubuntulinux/).
Or use the TCP socket instead (https://docs.docker.com/v1.8/articles/basics/);
Magnum/Swarm/docker-machine use this approach all the time.
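A quick sketch of both options (the TCP port and address are illustrative; the daemon must already be listening there):

```shell
# Option 1: add your user to the docker group so the CLI can reach
# /var/run/docker.sock without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"

# Option 2: point the client at a TCP socket instead of the unix socket
# (requires the daemon started with -H tcp://0.0.0.0:2375; insecure without TLS)
export DOCKER_HOST=tcp://127.0.0.1:2375
docker ps
```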
—
Clark,
What about ephemeral storage on the OVH VMs? I see many storage related errors
(see full output below) these days.
Basically it means Docker cannot create a storage device on the local drive:
-- Logs begin at Mon 2015-12-14 06:40:09 UTC, end at Mon 2015-12-14 07:00:38
UTC. --
Dec 14 06:45:50
Hongbin,
I believe most failures are related to the container tests. Maybe we should
comment only those out and keep the Swarm cluster provisioning.
Thoughts?
—
Egor
On Jan 8, 2016, at 06:37, Hongbin Lu
> wrote:
Done:
Jay,
"A/B testing" for PROD infra sounds very cool ;) (we are doing it with business
apps all the time, but are stuck with canary, incremental rollout, or blue-green
(if we have enough capacity ;)) deployments for infra). Do you mind sharing
details of how you are doing it? My concern is that you need
Wanghua,
I don’t think moving flannel into a container is a good idea. This setup is
great for a dev environment, but becomes too complex from an operator’s point
of view (you add an extra Docker daemon and need an extra Cinder volume for
this daemon; also keep in mind it makes sense to keep the etcd data folder
+1, I found that 'kubectl create -f FILENAME’
(https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/kubectl/kubectl_create.md)
works very well for different types of objects, and I think we should try to
use it.
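What I like about it is that one entry point handles any object kind, since the kind comes from the manifest itself (the file names below are made up):

```shell
# Create objects of different kinds with the same command
kubectl create -f redis-pod.yaml          # hypothetical pod manifest
kubectl create -f redis-service.yaml      # hypothetical service manifest
kubectl create -f ./manifests/            # or a whole directory at once
```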
But I think we should support two use-cases:
- 'magnum
, Hongbin Lu
<hongbin...@huawei.com> wrote:
There are other symptoms as well, which I have no idea about without a deep dive.
-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.o
Ton Ngo,
Hongbin Lu wrote on 01/18/2016 12:29:09 PM: Hi Egor, thanks for
investigating the issue. I will review the patch. Agreed. We can definitely e
From: Hongbin Lu <hongbin...@huawei.com>
To: Egor Guz <e...@walmartlabs