On Wed, Aug 8, 2018 at 3:07 PM Jay Pipes wrote:
> cadvisor is a library dependency for kubelet:
>
>
> https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cadvisor/BUILD#L29-L31
>
> so when you install kubelet, you will install the cadvisor library.
>
> Prometheus does not use
You may be interested in the Prometheus Operator.
https://coreos.com/blog/the-prometheus-operator.html
https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html
https://github.com/coreos/prometheus-operator
On Wed, Aug 8, 2018 at 8:06 AM Niranjan Kolly
wrote:
> Hi
No, there is no technical limit to the number of namespaces. From a
practical perspective, I've heard that scaling issues start to appear at
around ~1000 namespaces with the current releases.
On Mon, May 14, 2018 at 5:39 AM, wrote:
> Hi,
>
> I'm pretty new to k8s and we've
My usual recommendation is to have highly granular namespaces. One pattern
I've seen is that every unique instance of an app has a separate
namespace. This allows you to isolate concerns.
For example, an API, Database, worker queue, and monitoring for a single
app all live in an isolated
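As a sketch, a dedicated namespace per app instance might look like this (the name and labels are hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  # One namespace per unique instance of the app.
  name: myapp-production
  labels:
    app: myapp
    env: production
```

Everything belonging to that instance (API, database, workers, monitoring) is then created inside this namespace, so it can be quota'd, secured, and deleted as a unit.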
Kubernetes will be giving a /24 to each node, not each pod. Each node will
give one IP out of that /24 to a pod it controls. This default means you
can have 253 pods per node. This can of course be adjusted depending on
the size of your pods and nodes.
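The arithmetic above can be checked with a short sketch (the CIDR is an example value):

```python
import ipaddress

# Example pod CIDR a node might be assigned (hypothetical value).
node_pod_cidr = ipaddress.ip_network("10.244.1.0/24")

# .hosts() excludes the network and broadcast addresses: 256 - 2 = 254.
usable = sum(1 for _ in node_pod_cidr.hosts())

# One address is typically reserved for the node/gateway itself,
# leaving 253 addresses for pods.
pods_per_node = usable - 1
print(pods_per_node)  # 253
```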
This means that you can fully utilize the
On Thu, Jul 27, 2017 at 8:35 AM, wangpeng007 wrote:
> Thanks for all your guys answers!
> I have taken your advice and grabbed all the metrics from cadvisor and
> filtered them by service!
> Thanks again!
> I still have a small question:
> As containers have many statuses like:
>
>
Yes, that is correct.
Similar techniques apply for Prometheus. There is a set of rewrite rules
that can be used to filter metrics on ingestion, and recording rules to
aggregate data.
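As a sketch, dropping a metric on ingestion and aggregating with a recording rule might look like this (the job name, metric names, and label are hypothetical; in a real deployment the scrape config and the rule group live in separate files):

```yaml
# prometheus.yml: drop an unwanted metric at ingestion time
scrape_configs:
  - job_name: cadvisor
    metric_relabel_configs:
      - source_labels: [__name__]
        regex: container_network_tcp_usage_total
        action: drop

# rules file: aggregate per-container CPU usage by service
groups:
  - name: container.rules
    rules:
      - record: service:container_cpu_usage:sum
        expr: sum(rate(container_cpu_usage_seconds_total[5m])) by (service)
```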
As Matthias says, this kind of policy is generally better left to the
service reading the data, because it
would be easier than actual usage, since you can
>> basically multiply the reservation (whichever dimensions you choose) by the
>> run time. You could do this with Prometheus or simply by collecting the
>> data for finished containers from the Kubernetes API.
>>
>>
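The reservation-times-runtime idea from the quoted message can be sketched as follows (the numbers are made up; in practice the requests and run times would come from Prometheus or the Kubernetes API):

```python
# Charge each finished container its CPU reservation multiplied by run time.
containers = [
    # (cpu_request_cores, runtime_seconds) for hypothetical finished containers
    (0.5, 3600),   # half a core for one hour
    (2.0, 1800),   # two cores for half an hour
]

# Total billed usage in core-seconds.
core_seconds = sum(cpu * seconds for cpu, seconds in containers)
print(core_seconds)  # 0.5*3600 + 2.0*1800 = 5400.0
```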
This sounds like a job that should happen at provisioning time, or by
config management software.
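A minimal sketch of doing this at provisioning time, assuming a hypothetical uid/gid of 1000 for the non-root pod (the path here is a stand-in for the real hostPath):

```shell
#!/bin/sh
# Run once per node, e.g. from cloud-init or config management.
# Hypothetical hostPath the pod mounts; adjust to your real path.
HOSTPATH="${TMPDIR:-/tmp}/myapp-data"

mkdir -p "$HOSTPATH"
# Hand ownership to the non-root uid/gid the pod runs as (assumed 1000 here);
# ignore the failure when this script is not run as root.
chown 1000:1000 "$HOSTPATH" 2>/dev/null || true
chmod 0770 "$HOSTPATH"
```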
On Jan 23, 2017 10:00, "Mayank" wrote:
> Yeah, my use case is basically change the permissions of the hostPath so
> that my pods running as non-root can access it. I don't want
In Kubernetes, we're moving to an SDN-style access control system instead
of VLANs. You get better control without the hassle of moving VLANs
around.
Take a look at Canal, the overlay network and policy control software:
https://github.com/projectcalico/canal
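With a policy-capable network plugin like Canal, that access control is expressed as NetworkPolicy objects; a sketch (the namespace and labels are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: myapp
spec:
  # Applies to pods labeled role=api in this namespace.
  podSelector:
    matchLabels:
      role: api
  ingress:
    # Only pods labeled role=frontend may connect to them.
    - from:
        - podSelector:
            matchLabels:
              role: frontend
```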
On Fri, Jan 20, 2017 at 11:21 PM,
Kubernetes requires all nodes that run pods to have the kubelet running to
manage the containers. This also means that everything in the cluster
needs to be part of the same overlay network.
This can be done "remotely" with overlay software like the Weaveworks system.
But for most practical
The latest stable release is 1.4.6[0].
Please remember that there are always features and options in the
production release that are marked Beta or Alpha depending on their quality.
Please refer to the documentation[1].
[0]: https://github.com/kubernetes/kubernetes/releases
[1]:
Load balancers for bare metal are the same as load balancers for cloud
providers. There's not really anything different.
On Mon, Nov 21, 2016 at 7:46 AM, Sandeep Srinivasa
wrote:
> Very very needed!
> I would argue that k8s is the kind of disruptor that would replace the