Hi all,
happy to announce the latest release of our canary-releasing and
microscaling solution Vamp. We’ve added Docker Compose import support,
reverse-proxy features, and more:
http://vamp.io/documentation/release-notes/version-0-9-4/
For K8s-specific instructions:
At least for the few packages that I tested, they seem to be there.
Try opening repodata/primary.xml and check the location tags. They will
point to where the files actually are.
Are you having some trouble using the repos with yum itself?
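One quick way to do that check from a shell is to grep the location hrefs out of primary.xml. The file below is a hypothetical, minimal primary.xml just to illustrate the structure (a real one sits under the repo's repodata/ directory and is usually gzipped; the package name and path are placeholders, not taken from the actual Kubernetes repo):

```shell
# Hypothetical sample of a primary.xml; real repos ship a gzipped one
# under repodata/ with many <package> entries.
cat > primary.xml <<'EOF'
<metadata xmlns="http://linux.duke.edu/metadata/common">
  <package type="rpm">
    <name>kubectl</name>
    <location href="pool/kubectl-1.6.1-0.x86_64.rpm"/>
  </package>
</metadata>
EOF

# Pull the href out of every <location> tag to see where the RPMs
# are supposed to live relative to the repo root.
grep -o 'location href="[^"]*"' primary.xml | cut -d'"' -f2
# -> pool/kubectl-1.6.1-0.x86_64.rpm
```

If those paths resolve to 404s on the mirror, the metadata and the package tree are out of sync.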
On Mon, Apr 17, 2017 at 9:36 PM, Brandon Philips
Adding SIG Cluster Lifecycle.
On Thu, Apr 13, 2017 at 4:45 AM wrote:
> Is there any problem with YUM repos for Kubernetes today, or perhaps some
> automated build/release process failed?
>
> There's still the "repodata" directory and meta files, but actual packages
> are gone.
Hello John-
Today, etcd does not enforce IP SANs, but we just merged a change so that
it will enforce them IF they exist. Expect this change in a future release
of etcd v3.2. [1]
I forget the exact details on why IP SANs are necessary in the
certificates. However, IIRC there are some places in
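For reference, a self-signed cert carrying an IP SAN can be sketched with openssl as below. This is only an illustration of what "a cert with IP SANs" means here; the IP 10.0.0.1 and the DNS name are placeholders, not values from the thread, and -addext requires OpenSSL 1.1.1 or newer:

```shell
# Sketch: issue a self-signed cert that carries an IP SAN.
# 10.0.0.1 and etcd.example.local are placeholder names.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout etcd.key -out etcd.crt -days 365 \
  -subj "/CN=etcd-member" \
  -addext "subjectAltName=IP:10.0.0.1,DNS:etcd.example.local"

# Inspect the SANs that etcd would (optionally) enforce:
openssl x509 -in etcd.crt -noout -text | grep -A1 "Subject Alternative Name"
```

The second command is also handy for checking whether an existing cert from your CA carries IP SANs at all.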
I deleted it from the command line with "kubectl delete deployment -n
kloud-hosting kloud-php7", or via the dashboard; I don't remember which.
BTW, the dashboard version differs from the Kubernetes version: it's from
Kubernetes 1.6.0. Could this be the root cause?
On Monday, April 17, 2017 at 9:40:39 PM UTC+8,
On Mon, Apr 17, 2017 at 12:53:28PM -0700, gmarce...@gmail.com wrote:
> I have some Raspberry Pis distributed in a building; basically I use them
> as sensor detectors.
> I want to manage which Docker version is running on them, get their
> status, see the CPU usage ...
Can you please elaborate on
I have some Raspberry Pis distributed in a building; basically I use them
as sensor detectors.
I want to manage which Docker version is running on them, get their status,
and see the CPU usage ...
I don't know if Kubernetes can do that. I see it is built to manage
clusters, but can I use it for my purposes?
Nothing in that screenshot looks wrong, and I can't tell what is going on
in the "az" screenshot. All I see is a list of random pods, something
trying to talk to localhost, and a failed attempt to log in to the 'az'
CLI...
I have nothing to go on here. I still don't even know what is
In a CoreOS cluster migrating from fleet to Kubernetes (initial planning
stage), the CA is part of FreeIPA, which refuses to issue certs with IP
SANs [1]. However, the CoreOS Kubernetes [2] and other documentation all
call for issuing certs with IP SANs. Is this a strict requirement, or can
Thanks for linking to the docs, Mengqi. It looks like I was not aware of
imperative management using configuration files. "kubectl replace" appears
to be the command complementary to "kubectl create". In this case, it's by
design. I assumed "kubectl create -f" was probably on its way to
I discovered that the issue was a certificate whose common name didn't
match the ingress host.
More details here: https://github.com/kubernetes/ingress/issues/616
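A quick local way to verify that kind of mismatch is to print the subject and SANs of the cert and compare them against the host in the Ingress rule. The sketch below generates a throwaway cert purely for illustration; example.com stands in for whatever spec.rules[].host the Ingress uses (requires OpenSSL 1.1.1+ for -addext):

```shell
# Throwaway self-signed cert for illustration; example.com is a
# placeholder for the host named in the Ingress rule.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 1 \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com"

# The CN / SAN printed here must match the Ingress host, or the
# controller will fall back to its default certificate.
openssl x509 -in tls.crt -noout -subject
openssl x509 -in tls.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Running the two inspection commands against the cert stored in your real TLS secret shows immediately whether the names line up.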
On Friday, April 14, 2017 at 10:04:23 AM UTC-5, Daniel Watrous wrote:
>
> I am using the nginx ingress controller on
This may be a little different. I just posted a new issue:
https://github.com/kubernetes/ingress/issues/616
For some unknown reason, this works on one cluster but not on another. The
nginx.conf isn't being written with the TLS certificates or told to listen
on port 443.
One idea that I may
The following is the file used to create the Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kloud-php7
  namespace: kloud-hosting
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kloud-php7
    spec:
      containers:
      - name: kloud-php7
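The manifest is cut off after the container name. For illustration only, a minimal continuation of the containers section might look like the sketch below; the image and port are hypothetical placeholders, not values from the original message:

```yaml
# Hypothetical continuation for illustration only; the real image and
# port were not part of the original message.
      containers:
      - name: kloud-php7
        image: php:7-fpm        # placeholder image
        ports:
        - containerPort: 9000   # php-fpm's default port
```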