Hi Sebastian
Depending on the granularity you want, you can deploy your routers on
different nodes, group your routes according to the IPs you want to provide
access from, and configure iptables accordingly.
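For instance, something like this on the nodes hosting a given router (a
rough sketch; the port and source network are placeholders, and keep in mind
that OpenShift manages iptables rules of its own):
iptables -A INPUT -p tcp --dport 443 -s 10.1.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP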
For a finer, app-specific control you may want to look at network policies
but they are
Hi Lorenz
it seems that inter-pod anti-affinity has been introduced in Kubernetes 1.4
(alpha) [1]. For your scenario you may however rely on setting appropriate
resource requests for your pods. The scheduler takes them into consideration
by default and won't schedule app2/pod2 on the same node as
nks!
>
> i was looking for an openshift flag/option instead of directly docker :/
>
>
>
>
> El 17 nov 2016, a las 12:39, Frederic Giloux <fgil...@redhat.com>
> escribió:
>
> Hi Julio
>
> I hope I understand your question correctly. The first time docker is
Hi Den,
you may need internet connectivity. Public IPs are not a requirement for
that (confer proxy and NAT). Another option is to install OpenShift
disconnected. See:
https://docs.openshift.com/container-platform/3.3/install_config/install/disconnected_install.html
.
Also, editing /etc/hosts is
> So main goal: translations of routes through router should stay in the
> private network.
> Is that possible?
>
>
> Thanks
> --
> *From:* Frederic Giloux <fgil...@redhat.com>
> *Sent:* Thursday, 8 December 2016 13:35:12
> *To:* Den Cowb
Hi Krzysztof
there is the option "--period-seconds" with "oc set probe", confer: "oc set
probe --help".
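For example (a sketch; the dc name and probe details are placeholders):
oc set probe dc/myapp --readiness --get-url=http://:8080/healthz --period-seconds=10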
Regards,
Frédéric
On Wed, Dec 7, 2016 at 11:12 AM, Sobkowiak Krzysztof <
krzys.sobkow...@gmail.com> wrote:
> Hi
>
> Is it possible to set the frequency the liveness/readiness probes are
>
as Clayton wrote:
- create a service account [1]
- get its token: oc sa get-token
- log in with the token from your script: oc login
--token=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
You can use a vault and have your script retrieve the token from it but
that's outside the scope of OpenShift.
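Putting it together, something along these lines (the service account name,
project, role and master URL are placeholders):
oc create serviceaccount robot -n myproject
oc policy add-role-to-user view -z robot -n myproject
oc sa get-token robot -n myproject
oc login https://master.example.com:8443 --token=<token printed by the previous command>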
[1]
Hi Henryk
if I understand you properly the easiest way is to create an additional
layer on top of your image with a Dockerfile. In this Dockerfile, FROM will
point to your original image; you can then switch to the root user, do a
chmod +w on the directory you want to access (or all of them) and switch
back to the original user.
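Something along these lines (a sketch; the image name, directory and final
user ID are placeholders):
FROM myimage:latest
USER root
# give write access to the directory the application needs
RUN chmod -R g+w /opt/app-root
# switch back to the non-root user the image normally runs as
USER 1001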
find /foo -type d -exec chmod g+x {} + RUN
>
> Am I right?
>
> So yeah, this approach works, but I was wondering if I can do something
> without creating new image. Something like:
>
> oc new-app myimage --screw-that-filesystem-write-limits=true
>
> Thanks!
>
> wt.,
Hi Henryk
If I correctly understand your use case I think that the easiest way is to
create an imagestream foo and to use the pull-through feature:
https://docs.openshift.org/latest/install_config/registry/extended_registry_configuration.html#middleware-repository-pullthrough
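For instance (registry and image names are placeholders), importing the tag
creates the image stream that pull-through can then serve:
oc import-image foo --from=registry.example.com/team/foo:latest --confirm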
@example.org
> Registry server Password: <>
> error: build error: Failed to push image: Put
> http://172.30.123.59:5000/v1/repositories/testshared/d/: dial tcp
> 172.30.123.59:5000: getsockopt: no route to host
>
>
> 2017-06-21 7:31 GMT+02:00 Frederic Giloux <fgil..
Hi Lukasz,
this is not an unusual setup. You will need:
- the SDN port: 4789 UDP (both directions: masters/nodes to nodes)
- the kubelet port: 10250 TCP (masters to nodes)
- the DNS port: 8053 TCP/UDP (nodes to masters)
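For example with firewalld on the relevant hosts (a sketch; adapt it to
whatever manages your firewall):
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=8053/tcp --add-port=8053/udp
firewall-cmd --reload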
If you can't reach VLAN B pods from VLAN A, the issue is probably with the
SDN
.2.0/24
> rh71-os3.example.com rh71-os3.example.com 192.168.122.202
> 10.1.0.0/24
>
> and from the first shot I've noticed wrong IP addresses.
>
>
> I've re-run the playbook and everything is working like a charm. Thx a
> lot for your help.
>
> Best rega
Hi Andrew
as per Gaurav's email you are missing: oc new-project common. The project /
namespace named "common" has not been created.
Regards,
Frédéric
On Thu, Jun 22, 2017 at 9:50 AM, Andrew Lau wrote:
> Was there an extra step you used before
>
> oc policy
True, you first need to manually create the policybinding so that it can get
amended by the command:
oc create policybinding common -n common
On Thu, Jun 22, 2017 at 10:23 AM, Andrew Lau <and...@andrewklau.com> wrote:
> The namespace "common" does exist.
>
> On Thu, 22 J
Hi Lionel
yes these IPs are not exposed to the outside (S-NAT with the node IP
address). If you are not calling external services with conflicting IP
addresses you are fine.
Regards,
Frédéric
On Mon, Oct 16, 2017 at 11:40 AM, Lionel Orellana
wrote:
> Hi,
>
>
> Can two
made in the config file has been taken into
> consideration)?
>
> Thanks,
> Dave.
>
> On 05/04/2018 12:43 AM, Frederic Giloux wrote:
> > Hi Dave,
> >
> > you can change the suffix at the master level (you will need to restart
> > the master services):
> >
ew routes, but not
> old routes - correct?
>
> Thanks,
> Dave.
>
> On 05/04/2018 12:52 PM, Frederic Giloux wrote:
> > Hi Dave,
> >
> > the variable was renamed with 3.2:
> > https://docs.openshift.com/enterprise/3.2/install_config/revhistory_install_config.html#t
Hi Jason,
not sure if that's what you are after, but you could query the master
API from inside the container:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version/openshift
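If the endpoint requires authentication you can also pass the service
account token mounted in the pod, e.g. (a sketch):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version/openshift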
Regards,
Frédéric
On Fri, Apr 27,
Having the container run in OpenShift with a user ID from a higher range is
a good security practice. The user ID won't match any user ID on the host.
As an alternative to what Tim proposed you could modify your Docker image
and grant the root group (0) access to the necessary files. The
user
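A common way to do that in the Dockerfile (a sketch; /opt/app-root is a
placeholder for the files your application needs):
RUN chgrp -R 0 /opt/app-root && chmod -R g=u /opt/app-root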
Hi Karl,
OpenShift does not differentiate between POST and GET. Also, 405 is a client
error: "the request method is known by the server but has been disabled and
cannot be used". The issue is likely to be at the level of your application
providing the REST API. To validate it you could log into a pod and call the
application directly, bypassing the router.
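For example (pod name, port and path are placeholders):
oc rsh mypod curl -v -X POST http://localhost:8080/my/endpoint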
Hi Brian
If you want your script to be executed by new builds it should be named
assemble. It can then call the original assemble script, which you may have
renamed, a Python program or anything else you need. The run script is
called when the final container is launched, not during the build.
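A minimal sketch of such a wrapper (paths are placeholders and depend on the
builder image and on what you renamed the original script to):
#!/bin/bash
# .s2i/bin/assemble
set -e
/usr/libexec/s2i/assemble.orig   # call the original assemble script, renamed
python ./extra_build_step.py     # any additional processing you need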
/reference-architecture/day2ops/scripts/project_export.sh
Regards,
Frédéric
On Thu, May 31, 2018 at 11:26 PM, Brian Keyes wrote:
> thanks !!!
>
>
> is there a script to backup every single project in the entire openshift?
>
> thanks again!
>
> On Thu, May 31, 2018 at 4:23 PM,
Hi Brian
this is described in a bit more detail here, including for instance the
caveat regarding dcs and imagestreams:
https://docs.openshift.com/container-platform/3.7/day_two_guide/project_level_tasks.html
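For instance, a quick way to dump the core objects of a project (a sketch;
the linked page covers what is and isn't included):
oc get all --export -o yaml -n myproject > myproject.yaml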
Regards,
Frédéric
On Thu, May 31, 2018 at 6:10 PM, Brian Keyes wrote:
>
token, but not using python xD
> i get a 401 with long token, but i i use the short one that oc login gives
> works xD
>
>
>
>
> El 20 oct 2017, a las 8:59, Frederic Giloux <fgil...@redhat.com> escribió:
>
> Julio,
>
> have you tried the command with hig
Hi Julio,
Could you copy the commands you have used?
Regards,
Frédéric
On 19 Oct 2017 11:43, "Julio Saura" wrote:
> Hello
>
> i am trying to create a sa for accessing rest api with token ..
>
> i have followed the doc steps
>
> creating the account, applying admin role to
"kind": "Status",
> "apiVersion": "v1",
> "metadata": {},
> "status": "Failure",
> "message": "User \"system:serviceaccount:project1:icinga\" cannot list
> replicationcontrollers in
> < Date: Fri, 20 Oct 2017 06:28:52 GMT
> < Content-Length: 295
> {
> "kind": "Status",
> "apiVersion": "v1",
> "metadata": {},
> "status": "Failure",
> "message": "User \"s
Hi Charles,
this limit can be changed in the node configuration, see:
https://docs.openshift.com/container-platform/3.7/admin_guide/manage_nodes.html#admin-guide-max-pods-per-node
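For instance, in /etc/origin/node/node-config.yaml (a sketch; the value is
only illustrative):
kubeletArguments:
  max-pods:
    - "250"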
Looking at
https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md#configuration
it seems that the
Hi Srinivas,
here are a couple of scenarios where I find setting limits useful:
- When I do performance tests and want to compare results between runs,
setting CPU limits=CPU requests gives me confidence that the CPU cycles
available between the runs were more or less the same. If you don't set a
Hi Patrick
on RHEL 7.4 (and probably CentOS 7.4) look at "man
container-storage-setup". Similar to docker-storage-setup, it is a helper
script to configure the storage used by CRI-O.
Regarding the procedure, I will second what Louis said:
- first prepare your nodes, including storage
- install
lices first before CPU scheduling pod A since it doesn’t have limits?
>
>
>
>
>
> --
>
> *Srinivas Kotaru*
>
> *From: *Frederic Giloux <fgil...@redhat.com>
> *Date: *Thursday, March 22, 2018 at 9:22 AM
> *To: *Srinivas Naga Kotaru <skot...@cisco.com&g
In the previous example we looked at setting a limit on the pod with the
lower request, but you may rather want it on the pod with the higher
request. In this extreme scenario (a node with 32 cores) pod A was gaining
21 cores on top of its request whereas pod B gained only 3 when no limit was
set. You
Hi Pavel,
I think that the recommendation with 3.11 is to use the Prometheus Operator
[1]. By correctly labelling the service, Prometheus is able to discover and
monitor the endpoints exposed by all your pods backing the service. More
information in the original post [2].
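A minimal ServiceMonitor sketch (names, labels and the port are
placeholders), assuming the Prometheus Operator watches the namespace:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  endpoints:
  - port: metrics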
Regards,
Frédéric
[1]
Hi Alan
you can use a storage class for the purpose [1] and pair it with quotas for
the defined storage class [2] as proposed by Samuel.
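For instance (the storage class name, size and project are placeholders):
oc create quota gold-storage --hard=gold.storageclass.storage.k8s.io/requests.storage=50Gi -n myproject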
[1]
https://docs.okd.io/3.11/install_config/storage_examples/storage_classes_legacy.html#install-config-storage-examples-storage-classes-legacy
[2]