Re: openshift memory requirements

2017-05-23 Thread Louis Santillan
If the machine is an i7, it likely only has 4 cores/threads total, while 3 VMs x 2
cores = 6 cores required.  Also, instead of having 3 VMs with at least one
IO controller and a NIC each, you have 3 VMs sharing 1 IO controller and 1
NIC.  Not as fun as it sounds.

My lab machine is an HP Z820 with 32 cores and 128GB of RAM.  When I spin up
a SmartStart-style 3x3x3 cluster + 1 bastion host + 1 NFS host (3 masters,
3 infra, 3 app nodes + 2 support hosts), my IO controller can't keep up.  I
might as well be running on some 4-year-old cellphone.  Which reminds me, I
need to move those VMs over to the storage attached to my RAID controller.
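
If you want to confirm where the bottleneck is, something like the
following (a generic Linux sysstat sketch, nothing OpenShift-specific)
shows per-device saturation:

$ iostat -dxm 5
# watch the %util and await columns; a device pinned near 100% util
# with a high await is the choke point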

---

LOUIS P. SANTILLAN

SENIOR CONSULTANT, OPENSHIFT, MIDDLEWARE & DEVOPS

Red Hat Consulting, NA US WEST 

lpsan...@gmail.com    M: 3236334854

TRIED. TESTED. TRUSTED. 

On Tue, May 23, 2017 at 11:13 AM, Hetz Ben Hamo  wrote:

> Well, I installed (through the Ansible install) Origin 1.5 on 3 VMs,
> each of them with 4 GB RAM; the nodes had 2 cores. I also enabled the metrics.
>
> The entire system barely responded (this is an i7 machine with 16GB RAM).
> That's why I asked if there are any changes that I need to add through
> Ansible to make the system work well with that amount of RAM.
>
> Thanks,
> *Hetz Ben Hamo*
> You are welcome to visit my consulting blog or my personal blog
>
>
> On Tue, May 23, 2017 at 9:09 PM, Clayton Coleman 
> wrote:
>
>> OpenShift at that scale probably requires 300-500M on the master and
>> 100-200M on the nodes.
>>
>> On Tue, May 23, 2017 at 2:02 PM, Hetz Ben Hamo  wrote:
>>
>>> I thought about starting with 2 nodes with 4GB (and 2 cores) and adding
>>> nodes as needed.
>>> Number of pods: I thought about 6 (2 wordpress, 2 mysql, 2 memcache).
>>>
>>> Thanks,
>>> *Hetz Ben Hamo*
>>> You are welcome to visit my consulting blog or my personal
>>> blog
>>>
>>> On Tue, May 23, 2017 at 8:59 PM, Clayton Coleman 
>>> wrote:
>>>
 How many nodes and pods are you planning to run?

 On Tue, May 23, 2017 at 1:43 PM, Hetz Ben Hamo  wrote:

> Hi,
>
> I've read the docs about OpenShift memory requirements and I wanted to
> ask something.
>
> I'm planning to build a system which will host a web site (WordPress
> based, for example) which will auto-scale based on the number of
> visitors.
>
> According to the docs in the link (https://docs.openshift.com/container-platform/3.5/install_config/install/prerequisites.html)
> it requires 16GB RAM (it used to be 8GB in 3.0). The docs don't mention the
> amount of RAM needed for the infra nodes (I assume another 8GB?)
>
> My question: is there any way to build a system with much less RAM?
> Something like 4GB for the master and 4GB per node (minimum 2 nodes)? If so,
> what configurations should I add to my Ansible host file?
>
> Thanks
>

>>>
>>
>


Re: Possible bug with haproxy?

2017-05-26 Thread Louis Santillan
On the internal consultant chat server, this has come up several times, and some
customers are using TCP backend mode (vs. HTTP or HTTPS) on their routers
to support HTTP/2.
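
For what it's worth, one way to get a TCP-mode backend out of the OpenShift
router is a passthrough route, where HAProxy forwards the raw TLS stream
and the pod terminates it (a sketch; the route, service, and port names are
placeholders):

$ oc create route passthrough myapp --service=myapp-svc --port=8443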

As for OP's situation, more details are needed about the OAuth client
config and DNS settings.

Is DNS set up per the documentation?  Does the VIP for *.apps.example.com point to
the load balancer(s), or are you using DNS round-robin?

It sounds like there is some sort of DNS confusion going on.

Otherwise, verify that the OAuth authorize, token (and possibly userInfo)
endpoints in the OAuth messages point to where you're expecting [0].

[0]
https://docs.openshift.org/latest/install_config/configuring_authentication.html#OpenID
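
A quick way to see which endpoints the master actually advertises is the
standard OAuth discovery document it serves (hostname and port are
placeholders):

$ curl -k https://master.example.com:8443/.well-known/oauth-authorization-server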




On Fri, May 26, 2017 at 1:24 PM, Clayton Coleman 
wrote:

> HAProxy doesn't currently support HTTP/2 connections - so unless you've
> done something custom, you shouldn't even be able to connect to HAProxy as
> http/2
>
> On Fri, May 26, 2017 at 4:10 PM, Philippe Lafoucrière <
> philippe.lafoucri...@tech-angels.com> wrote:
>
>> Hi, could you take a look at this please:
>> https://stackoverflow.com/questions/44162263/request-cached-when-using-http-2/44163462
>>
>> I wonder if the problem could come from haproxy?
>> We're using the images "openshift/origin-haproxy-router:v1.5.0"
>>
>> Thanks,
>> Philippe
>>
>>


Re: Backup of databases on OpenShift

2017-06-09 Thread Louis Santillan
My personal feeling is that, for at least MySQL/MariaDB &
PostgreSQL, I would set up replication with compression to a DB hosted
outside the cluster -- preferably your ODW/DW DB instance(s) or maybe a
staging DB.  With compression, you ship relatively small logs over the wire.
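
For MySQL/MariaDB, for example, replica-side compression is a single knob
set on the replica (a sketch of just that knob, not a full replication
setup):

mysql> SET GLOBAL slave_compressed_protocol = ON;

And the `pg_dump` idea from the original post also works ad hoc, e.g.
(assuming a pod labeled name=postgresql; all names are placeholders):

$ oc exec $(oc get pod -l name=postgresql -o jsonpath='{.items[0].metadata.name}') -- pg_dump -U user mydb > backup.sql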


On Thu, Jun 8, 2017 at 7:46 AM, Jens Geiregat 
wrote:

> Hi,
>
> We recently set up an OpenShift Enterprise cloud and we're wondering what
> the best practices are for backing up databases running in an OpenShift
> cloud. I will focus on PostgreSQL here, but the same goes for MongoDB,
> MariaDB...
>
> - Should we rely on backups of the persistent volumes (we're using NFS)?
> This would mean assuming the on-disk state is always recoverable. Which it
> *should* be, but it does feel like a hack...
> - Should we have an admin-level oc script that filters out all running
> database containers and does some 'oc exec pg_dump ... > backup.sql' magic
> on them?
> - Should we provide some simple templates to our users that contain
> nothing but a cron script that calls pg_dump?
> ...
>
> Please share your solutions.
>
> Kind Regards,
>
>
> Jens
>


Re: 2 questions

2017-06-14 Thread Louis Santillan
With regard to routes, the best thing to do is create a new route that
matches the DNS name you want.
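
For example (service and hostname are placeholders):

$ oc expose service myapp --hostname=new-name.apps.example.com --name=myapp-new
$ oc delete route myapp   # optionally retire the old route afterwards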


On Wed, Jun 14, 2017 at 2:42 PM, Hetz Ben Hamo  wrote:

> Hi,
>
> I was wondering, regarding the OpenShift GUI, if the following features
> are planned (or available):
>
> 1. Changing the route DNS name after it has already been created (I think
> it's available on the hosted OpenShift, but not on Origin).
> 2. A simpler way to authenticate to Git-based solutions (GitLab, an internal
> Git server). By simpler I mean a way to click a checkbox to select an
> authentication method and type a username and password.
>
> Thanks
>


Re: oc whoami bug?

2017-06-19 Thread Louis Santillan
The default user for any request is `system:anonymous` when a user is not logged
in or a valid token is not found.  Depending on your cluster, this usually
has almost no access (less than `system:authenticated`).  Maybe an RFE is in
order (oc could suggest logging in if a request is unsuccessful and the
resolved user happens to be `system:anonymous`).


On Mon, Jun 19, 2017 at 12:20 PM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

> Hi,
>
> I think I have hit a bug (or a lack of a warning message) with `oc whoami
> -t`: I tried to log in to our registry, and only got "unauthorized:
> authentication required" responses. After a couple of tries, I launched `oc
> whoami` without -t:
> "error: You must be logged in to the server (the server has asked for the
> client to provide credentials)"
> The server was probably returning a token for an anonymous user, but this
> is a bit disturbing :)
>
>


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
The `oc` command always looks for the current session in `~/.kube/config`.
It doesn't know if a session is expired or not, since session timeouts are
configurable and could have changed since the last API call was made to the
master(s).  You can run your `oc` commands with `--loglevel=8` to see
this interaction play out.

You could also run your command like so (in bash):

$ ocx () { oc whoami && oc "$@" || echo "ERROR: You may not be logged in!" ; }
$ ocx get pods -o wide


On Tue, Jun 20, 2017 at 6:51 AM, Philippe Lafoucrière <
philippe.lafoucri...@tech-angels.com> wrote:

>
> On Mon, Jun 19, 2017 at 4:56 PM, Louis Santillan 
> wrote:
>
>> The default user for any request is `system:anonymous` when a user is not
>> logged in or a valid token is not found.  Depending on your cluster, this
>> usually has almost no access (less than `system:authenticated`).  Maybe an
>> RFE is in order (oc could suggest logging in if a request is unsuccessful
>> and the resolved user happens to be `system:anonymous`).
>
>
> That's what I suspect, but when I'm logged in, I expect the token to be mine.
> In this particular case, the session had expired, and nothing warned that
> the issued token was for `system:anonymous` instead of me.
>
> Thanks,
> Philippe
>
>


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
$ ocx () { oc project 2&>/dev/null && oc "$@" || echo "ERROR: You may not be
logged in!" ; }
$ ocx get pods -o wide


On Tue, Jun 20, 2017 at 11:34 AM, Jordan Liggitt 
wrote:

> `oc whoami -t` doesn't talk to the server at all... it just prints your
> current session's token


Re: oc whoami bug?

2017-06-20 Thread Louis Santillan
Whoops.  Hit the Send button early.

$ ocx () { ( oc project >/dev/null 2>&1 ) && oc "$@" || echo "ERROR: You may
not be logged in!" ; }

$ ocx get pods -o wide




Re: Configuring custom certs

2017-07-28 Thread Louis Santillan
I tend to use the Ansible installer instead of `oc cluster up`, but have
you tried following the documented procedures [0][1], and specifically the
one for the masters [2]?  You may have to redeploy the CA as well [3].

[0]
https://docs.openshift.org/latest/install_config/certificate_customization.html
[1]
https://docs.openshift.org/latest/install_config/redeploying_certificates.html
[2]
https://docs.openshift.org/latest/install_config/redeploying_certificates.html#redeploying-master-certificates
[3]
https://docs.openshift.org/latest/install_config/redeploying_certificates.html#redeploying-new-custom-ca
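
For reference, the documented shape of a custom cert in master-config.yaml
is a `namedCertificates` entry under `servingInfo` (a sketch; the paths and
hostname are placeholders):

```
servingInfo:
  bindAddress: 0.0.0.0:8443
  certFile: master.server.crt
  keyFile: master.server.key
  namedCertificates:
  - certFile: /etc/origin/master/fullchain.pem
    keyFile: /etc/origin/master/privkey.pem
    names:
    - "console.example.com"
```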


On Fri, Jul 28, 2017 at 6:28 AM, Tim Dudgeon  wrote:

> So I found the reason why the server wasn't starting - the certs need to
> be copied to the directory where the configurations are. I was pointing to
> them from a different location.
>
> But I'm still not able to get the custom certs working.
> If I define them in the assetConfig.ServingInfo section then the server
> starts, but the web console doesn't use them.
> If I define them in the servingInfo section (just change the certFile,
> clientCA and keyFile props) then the server doesn't start.
>
> Is there a description of what all these certificates are used for and how
> to use custom certificates anywhere?
>
> Tim
>
>
>
> On 28/07/2017 13:30, Cesar Wong wrote:
>
>> Hi Tim,
>>
>> You may want to enable additional logging by running 'oc cluster up
>> --loglevel=5 --server-loglevel=5'.
>>
>> If the origin container can't start, there's something wrong with the
>> master-config.yaml (could be as simple as a formatting issue)
>>
>> On Jul 28, 2017, at 6:17 AM, Tim Dudgeon  wrote:
>>>
>>> I'm trying to work out how to deploy custom certificates so that the OS
>>> console doesn't complain about untrusted certs.
>>> I've obtained certificates using Let's Encrypt, so have the following
>>> files:
>>> cert.pem chain.pem fullchain.pem privkey.pem
>>>
>>> Now I try to update my master-config.yaml to use these.
>>> I was thinking that the minimum needed would be to edit:
>>>
>>> assetConfig.ServingInfo.certFile to point to fullchain.pem
>>>
>>> assetConfig.ServingInfo.keyFile to point to privkey.pem
>>>
>>> and leave assetConfig.ServingInfo.clientCA as empty.
>>>
>>> I made no other changes.
>>>
>>> Unfortunately this does not work. oc cluster up fails badly without
>>> saying much that is useful:
>>>
>>>
>>> Starting OpenShift using openshift/origin:v3.6.0-rc.0 ...
>>> -- Checking OpenShift client ... OK
>>> -- Checking Docker client ... OK
>>> -- Checking Docker version ... OK
>>> -- Checking for existing OpenShift container ...
>>>   Deleted existing OpenShift container
>>> -- Checking for openshift/origin:v3.6.0-rc.0 image ... OK
>>> -- Checking Docker daemon configuration ... OK
>>> -- Checking for available ports ... OK
>>> -- Checking type of volume mount ...
>>>   Using nsenter mounter for OpenShift volumes
>>> -- Creating host directories ... OK
>>> -- Finding server IP ...
>>>   Using 127.0.0.1 as the server IP
>>> -- Starting OpenShift container ...
>>>   Starting OpenShift using container 'origin'
>>> FAIL
>>>   Error: could not start OpenShift container "origin"
>>>   Details:
>>>   No log available from "origin" container
>>>
>>> Any pointers to how to do this correctly?
>>>
>>> Thanks
>>> Tim
>>>


Re: S2I builds not incremental?

2017-10-14 Thread Louis Santillan
Also, you may need support from your base image as well.  What I
mean is `.s2i/bin/assemble` needs to work together with `.s2i/bin/save-artifacts`.
See a Spring Boot w/Gradle example here [0].

[0] https://github.com/pylejowdmn/springboot-gradle-s2i/tree/master/.s2i/bin
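
As a rough illustration, a minimal `save-artifacts` just streams the build
cache to stdout as a tar archive, which S2I then feeds back to the next
`assemble` run (a sketch assuming a Gradle cache directory; adjust the
paths for your builder):

```
#!/bin/sh
# .s2i/bin/save-artifacts: stream reusable artifacts to stdout as tar
tar cf - .gradle 2>/dev/null
```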


On Fri, Oct 13, 2017 at 2:01 AM, Tako Schotanus  wrote:

> Yes! Thank you, Daniel :)
>
> On Fri, Oct 13, 2017 at 10:43 AM, Daniel Kučera 
> wrote:
>
>> Hi Tako.
>>
>> You need to set incremental: true
>>
>> https://docs.openshift.com/container-platform/3.4/dev_guide/builds/build_strategies.html#incremental-builds
>>
>> I also spent some time on this :)
>>
>> 2017-10-13 10:39 GMT+02:00 Tako Schotanus :
>>
>>> Hi,
>>>
>>> we've been using S2I builds for RHOAR for about a month now, which is a
>>> lot faster and a lot less resource-intensive than using Jenkins pipeline
>>> builds, but
>>>
>>> the S2I builders are supposed to support incremental builds, and indeed
>>> when I run "s2i --incremental " manually it does work,
>>> but inside OpenShift the builds are never incremental.
>>>
>>> Are we doing something wrong? How do we make OpenShift do incremental
>>> builds?
>>>
>>> Cheers,
>>>
>>> --
>>>
>>> TAKO SCHOTANUS
>>>
>>> SENIOR SOFTWARE ENGINEER
>>>
>>> Red Hat
>>>
>>> 
>>> 
>>>
>>>
>>
>>
>> --
>>
>> S pozdravom / Best regards
>> Daniel Kucera.
>>
>
>
>
>


Re: Q: Refer to runtime properties from within "templates"?

2017-10-20 Thread Louis Santillan
Terminology: I tend to call the YAML/JSON files used in `oc apply -f ...`:
YAML Files, Object Files, Object Definition Files, or API Object Files.
OpenShift also has multiple concepts of Template Files in the form of jinja
files (which produce YAML Files), multiple API Object Files in a single
YAML file, and parameterized API Object Files that contain a
`parameters:` section and one or more API Object definitions [0].

I find the latter especially useful.  You can create an entire Application
Lifecycle Landscape (DEV, QA, PROD) and the requisite API Objects (bc, dc,
svc, etc.) in one call to `oc process -f ... | oc apply -f -` with full
parameterization [1].  Every single API Object can be created this way,
and you can add your templates to the shared `openshift` project namespace
to make them available to other users of the cluster (using `oc create -n
openshift -f myTemplate.yaml`).

[0]
https://docs.openshift.com/container-platform/3.6/dev_guide/templates.html#writing-templates
[1]
https://docs.openshift.com/container-platform/3.6/dev_guide/templates.html#generating-a-list-of-objects
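
A minimal sketch of such a parameterized file, and one call to instantiate
it (all names here are placeholders):

```
apiVersion: v1
kind: Template
metadata:
  name: app-landscape
parameters:
- name: APP_NAME
  required: true
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${APP_NAME}
  spec:
    ports:
    - port: 8080
    selector:
      app: ${APP_NAME}
```

$ oc process -f app-landscape.yaml -p APP_NAME=myapp | oc apply -f -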



On Thu, Oct 19, 2017 at 8:39 AM, Tako Schotanus  wrote:

> Hi,
>
> is it possible in some way to refer to actual runtime values from within
> an OpenShift template (I'm calling it templates because I don't know the
> official terminology. I'm referring to the json/yaml files that can be
> applied using "oc apply" for example).
>
> What I'm really trying to do is to figure out what hostname was used for a
> Route that was created using an earlier apply and use it in the template
> that is to be applied.
>
> I'm guessing there's no such thing, but I want to make sure before coming
> up with some kind of system of our own to do replacements in template files.
>
> Thanks
>
>
>


Re: OpenShift master keeps consuming lots and memory and swapping

2017-10-20 Thread Louis Santillan
Firstly, leaving swap enabled is an anti-pattern in general [0], as
OpenShift is then unable to recognize OOM conditions until performance is
thoroughly degraded.  Secondly, we generally recommend to our customers
that they have at least 20GB of RAM [1] for masters.  I've seen many
customers go far beyond that for comfort.


[0] https://docs.openshift.com/container-platform/3.6/admin_guide/overcommit.html#disabling-swap-memory
[1] https://docs.openshift.com/container-platform/3.6/install_config/install/prerequisites.html#production-level-hardware-requirements
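
Disabling swap on a node is the usual two-step (generic Linux; double-check
your /etc/fstab before editing it):

$ sudo swapoff -a
$ sudo vi /etc/fstab   # comment out the swap entry so it stays off after reboot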



On Fri, Oct 20, 2017 at 4:54 PM, Joel Pearson  wrote:

> Hi,
>
> I've got a brand new OpenShift cluster running on OpenStack and I'm
> finding that the single master that I have is struggling big time, it seems
> to consume tons of virtual memory and then start swapping and slows right
> down.
>
> It is running with 16GB of memory, 40GB disk and 2 CPUs.
>
> The cluster is fairly idle, so I don't know why the master gets this way.
> Restarting the master solves the problem for a while, for example, I
> restarted it at 10pm last night, and when I checked again this morning it
> was in the same situation.
>
> Would having multiple masters alleviate this problem?
>
> Here is a snapshot of top:
>
> [image: Inline images 1]
>
> Any advice?  I'm happy to build the cluster with multiple masters if it
> will help.
>
>
> --
> Kind Regards,
>
> Joel Pearson
> Agile Digital | Senior Software Consultant
>
> Love Your Software™ | ABN 98 106 361 273
>
>


Re: Hard Disk is full because of OpenShift Origin

2017-10-28 Thread Louis Santillan
To run `oc adm prune` (formerly `oadm prune`) you need a real user (not a
service account, like `system:admin`) with `cluster-admin` or
`system:image-pruner` permission [0].


[0]
https://docs.openshift.com/container-platform/3.6/admin_guide/pruning_resources.html#pruning-images
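
For example (the user name is a placeholder; nothing is deleted until you
pass `--confirm`):

$ oc adm policy add-cluster-role-to-user system:image-pruner pruneuser
$ oc login -u pruneuser
$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm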


On Sat, Oct 28, 2017 at 12:41 AM, Tien Hung Nguyen  wrote:

> I have checked the disk usage under the /mnt/var/lib/origin path and it
> says I'm using almost 10GB there although I have deleted the images with
> the 'docker rmi [imageid]' and 'oc cluster down' commands. I think
> that's the problem...
> Is it ok to just manually delete the origin folder in the Linux host to
> fix that problem?
>
> Furthermore, I have also executed the 'docker exec -it origin bash' command
> in order to get into the origin container and execute the 'oadm prune
> images' command. However, it says that I'm missing an admin account token
> to execute it.
>
> I'm using a local OpenShift Origin 3.6 version running on Docker for Mac,
> installed via the 'oc cluster up --host-data-dir
> /Users/username/oc-data' command. The directory /Users/username/oc-data
> on my computer occupies only 208.8MB.
>
> What's the best way to solve this problem?
>
>
> 2017-10-27 22:21 GMT+02:00 Graham Dumpleton :
>
>> A question for OP. Are you using options to oc cluster up to persist data
>> when shutting it down?
>>
>> On 27 Oct 2017, at 10:58 pm, Mauricio Améndola 
>> wrote:
>>
>> Hello,
>> The correct way to remove old images is using the "oadm prune ..." command [1].
>> I remember that there are two folders that increase a lot due to tmp files.
>>
>> - /var/lib/origin
>> - /var/lib/docker
>>
>> Try oadm prune and give some feedback
>> Regards,
>>
>> [1] https://docs.openshift.com/container-platform/3.6/admin_guide/pruning_resources.html
>>
>>
>> On Oct 26, 2017, at 6:37 PM, Tien Hung Nguyen 
>> wrote:
>>
>> Hi everybody,
>>
>> I have a problem with my hard drive space. Since I'm using OpenShift
>> locally with Docker, my hard drive gets full
>> very fast and I can't reclaim the space. I have already run the 'oc
>> cluster down' and 'docker rmi [imageid]' commands to delete unused
>> images but it has no effect.
>>
>> Please, could you tell me how to free up my disk space properly?
>>
>> Thank you!
>>


Re: Origin 3.6 update master certificate issue

2017-10-30 Thread Louis Santillan
Try this solution [0].  It mentions metrics but the general procedure
should also work for registry-console.

[0]
https://docs.openshift.com/container-platform/3.6/admin_solutions/certificate_management.html#change-app-cert-to-ca-signed-cert


On Mon, Oct 30, 2017 at 12:30 PM, Marcello Lorenzi 
wrote:

> Hi All,
> we tried to use the playbook
> /root/openshift-ansible/playbooks/byo/openshift-cluster/redeploy-master-certificates.yml
> to deploy custom certificates for the public hostname used for the master UI
> and API, but after the update via Ansible the registry console doesn't work.
>
> We tried to restore the previous /etc/origin directory content on the
> master nodes and all works fine.
>
> Do we also have to configure the certificate for the registry before this
> update?
>
> Thanks,
> Marcello
>


Re: confusion over storage for logging

2017-11-01 Thread Louis Santillan
I have an active PR for that in the Scaling Performance Section [0][1][2].

Once it lands, I plan to add more references to that section from the
Registry, Metrics, & Logging install docs.

[0] https://github.com/openshift/openshift-docs/pull/6033
[1]
https://github.com/tmorriso-rh/openshift-docs/blob/89e0641169ea9cc35c5c4adb538639aeff62e8b4/scaling_performance/optimizing_storage.adoc#general-storage-guidelines
[2]
https://github.com/tmorriso-rh/openshift-docs/blob/89e0641169ea9cc35c5c4adb538639aeff62e8b4/scaling_performance/optimizing_storage.adoc#back-end-recommendations



On Wed, Nov 1, 2017 at 1:21 PM, Rich Megginson  wrote:

> On 11/01/2017 10:50 AM, Tim Dudgeon wrote:
>
>> I am confused over persistent storage for logging (elasticsearch).
>>
>> The latest advanced installer docs [1] specifically describe how to
>> configure NFS for persistent storage, but the docs for "aggregating
>> container logs" [2] say that NFS should not be used (except in one
>> particular scenario) and seem to suggest that the only really suitable
>> option is a volume (disk) directly mounted on each logging node.
>>
>> Could someone clarify the situation?
>>
>
> Elasticsearch says do not use NFS: https://www.elastic.co/guide/en/elasticsearch/guide/2.x/indexing-performance.html#_storage
>
> We should make that clear in the docs.
>
> Please file a doc bug.
>
>
>
>> Tim
>>
>>
>> [1] https://docs.openshift.org/latest/install_config/install/advanced_install.html#advanced-install-cluster-logging
>>
>> [2] https://docs.openshift.org/latest/install_config/aggregate_logging.html#aggregated-elasticsearch
>>


Re: Logging seems to be working, but no logs are collected

2017-11-02 Thread Louis Santillan
Tim,

This KCS may also be of use to you [0].

[0] https://access.redhat.com/solutions/3220401


On Thu, Nov 2, 2017 at 9:00 AM, Rich Megginson  wrote:

> On 11/02/2017 02:01 AM, Tim Dudgeon wrote:
>
>>
>> Noriko, That fixed it.
>> There was no filter-post-z-* file and the  and 
>> tags were present.
>> After removing those tags and restarting the fluentd pods logs are
>> getting pushed to ES.
>>
>> So the question is how to avoid this problem in the first place?
>>
>>
> Upstream logging is a bit of a mess right now.
>
> Some time ago we decoupled the configuration of logging from the
> implementation.  That is, we moved all of the configuration into
> openshift-ansible.  That meant we needed to either release
> openshift-ansible packages and logging images in absolute lock-step (which
> didn't happen - in fact we never released upstream logging images for 3.6.x
> - this is now being addressed - https://github.com/openshift/o
> rigin-aggregated-logging/pull/758), or we need to ensure that
> openshift-ansible logging changes did not depend on the image version, and
> vice versa (this also didn't happen - we released changes to the logging
> images that assumed they would only ever be deployed with a specific
> version of openshift-ansible, instead of adopting a more "defensive
> programming" style).
>
>
> This was a simple ansible install using this in the inventory file:
>>
>> openshift_logging_image_version=v3.6.1
>> openshift_hosted_logging_deploy=true
>> openshift_logging_fluentd_journal_read_from_head=false
>>
>> (note, the image tag for the ES deployment currently needs to be changed
>> to :latest for ES to start, but that's a separate issue).
>>
>>
>> On 01/11/2017 21:00, Noriko Hosoi wrote:
>>
>>> On 11/01/2017 12:56 PM, Rich Megginson wrote:
>>>
 On 11/01/2017 01:18 PM, Tim Dudgeon wrote:

> More data on this.
> Just to confirm that the journal on the node is receiving events:
>
> sudo journalctl -n 25
> -- Logs begin at Wed 2017-11-01 14:24:08 UTC, end at Wed 2017-11-01
> 19:15:15 UTC. --
> Nov 01 19:14:23 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:23.286735   15148 rest.go:324] Starting watch for 
> /api/v1/configmaps,
> rv=1940 labels= fields
> Nov 01 19:14:24 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:24.288497   15148 rest.go:324] Starting watch for /api/v1/nodes,
> rv=6595 labels= fields= tim
> Nov 01 19:14:29 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:29.283528   15148 rest.go:324] Starting watch for
> /apis/extensions/v1beta1/ingresses, rv=4 l
> Nov 01 19:14:36 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:36.566696   15148 rest.go:324] Starting watch for /api/v1/pods,
> rv=6028 labels= fields= time
> Nov 01 19:14:40 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:40.284191   15148 rest.go:324] Starting watch for
> /api/v1/persistentvolumeclaims, rv=1606 la
> Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:43.291205   15148 rest.go:324] Starting watch for /apis/
> authorization.openshift.io/v1/policy
> Nov 01 19:14:43 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:43.34   15148 rest.go:324] Starting watch for
> /oapi/v1/hostsubnets, rv=1054 labels= fiel
> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
> 19:14:47.255576   20672 operation_generator.go:609] MountVolume.SetUp
> succeeded for volume "kubernet
> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
> 19:14:47.256440   20672 operation_generator.go:609] MountVolume.SetUp
> succeeded for volume "kubernet
> Nov 01 19:14:47 master-1.openstacklocal origin-node[20672]: I1101
> 19:14:47.258455   20672 operation_generator.go:609] MountVolume.SetUp
> succeeded for volume "kubernet
> Nov 01 19:14:48 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:48.291988   15148 rest.go:324] Starting watch for /apis/
> authorization.openshift.io/v1/cluste
> Nov 01 19:14:51 master-1.openstacklocal sshd[46103]: Invalid user
> admin from 118.89.45.36 port 17929
> Nov 01 19:14:51 master-1.openstacklocal sshd[46103]:
> input_userauth_request: invalid user admin [preauth]
> Nov 01 19:14:52 master-1.openstacklocal sshd[46103]: Connection closed
> by 118.89.45.36 port 17929 [preauth]
> Nov 01 19:14:56 master-1.openstacklocal origin-master[15148]: I1101
> 19:14:56.206290   15148 rest.go:324] Starting watch for /api/v1/services,
> rv=2008 labels= fields=
> Nov 01 19:14:57 master-1.openstacklocal origin-master[151

Re: Using Environment Variables within context.xml - Tomcat 8 Source 2 Image

2018-01-17 Thread Louis Santillan
David,

Try adding `env.` to your variables (e.g. `${env.MAPPING_JNDI}`) [0].  You
can also verify that the vars are set the way you expect using `oc rsh ...`
or `oc debug ...` (in the case of a failed pod).

[0] https://access.redhat.com/solutions/3190862
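
For illustration, the Resource in context.xml would then look something
like this (attribute list trimmed; this assumes the image wires up Tomcat's
environment property source as described in the KCS above):

```
<Resource name="jdbc/mapping" auth="Container" type="javax.sql.DataSource"
          url="${env.MAPPING_URL}" />
```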




On Wed, Jan 17, 2018 at 2:23 AM, David Gibson 
wrote:

> Hello,
>
> I was wondering if it is possible to achieve the following:
>
> We have created a Geoserver web app using the Tomcat 8 source to image
> file, however we require this app to connect to 3 external databases to
> retrieve the spatial data.
>
> To build our application we are using the Jenkins S2I and have created a
> build pipeline that will build, deploy and promote the application through
> various stages eg dev, test, prod.
>
> Using the Tomcat Source 2 Image, the app has been created and the war file
> gets deployed along with the context.xml file specific to the application;
> if we hardcode all the values in the context.xml file, this works
> for an individual environment.
>
> I have read that it was possible in OS version 2 to substitute the values
> in the context.xml file with environment variables within OS, however this
> does not seem to work.
>
> What we have is
>
> context xml
>
>   url = "${MAPPING_URL}"
>
> etc.
>
>   url = "${OSMAP_URL}"
>
> In the deploy template we have these values configured as environment
> variables, as such:
>
> - name: "MAPPING_JNDI"
>   value: ${MAPPING_JNDI}
>
> where the values are read in from a properties file.
>
> If I use the terminal to inspect the pod I can see that the environment
> variables are all set correctly; however, the JNDI lookup fails as the values
> have not been substituted. Is it possible to do this?
>
> Thanks,
>
> David
>
>


Re: Using Environment Variables within context.xml - Tomcat 8 Source 2 Image

2018-01-18 Thread Louis Santillan
Which base image are you using?




On Thu, Jan 18, 2018 at 12:41 AM, David Gibson 
wrote:

> Louis,
>
> Thanks for the response. I have tried prefixing the variables with env.
> but it had the same result. I also checked to make sure the pod had picked up
> the correct environment variables and all had been set as expected.
>
> David
>
>


Re: Using Environment Variables within context.xml - Tomcat 8 Source 2 Image

2018-01-18 Thread Louis Santillan
Oh, can you also provide your directory structure and the commands you're
using to build your app image?

If you're using a template DC, and you have specified
- name: "MAPPING_JNDI"
  value: ${MAPPING_JNDI}
in the `env` section, then those environment variables are not coming from
your properties file (at least initially).  They're coming from what you
specify in your `oc process ... | oc apply ...` step [0][1].


[0]
https://docs.openshift.com/container-platform/3.7/dev_guide/templates.html#generating-a-list-of-objects
[1]
https://access.redhat.com/documentation/en-us/red_hat_jboss_middleware_for_openshift/3/html-single/red_hat_jboss_web_server_for_openshift/index#using_the_jws_for_openshift_image_source_to_image_s2i_process
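
That is, the value has to be supplied at process time, e.g. (file and
parameter value are placeholders):

$ oc process -f app-template.yaml -p MAPPING_JNDI=jdbc/mapping | oc apply -f -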








Re: Help using ImageStreams, DCs and ImagePullSecrets templates with a GitLab private registry (v3.6)

2018-01-19 Thread Louis Santillan
Gaurav, Alan,

What is the full (redact if necessary for artifactory) output of `curl -kv
https:///v2//`?

I get the following headers when I naively hit `
https://registry.gitlab.com/v2/myproject/myimage/manifests/latest`

   Content-Length: 160
   Content-Type: application/json; charset=utf-8
   Date: Fri, 19 Jan 2018 07:58:26 GMT
   Docker-Distribution-Api-Version: registry/2.0
   Www-Authenticate: Bearer realm="https://gitlab.com/jwt/auth",service="container_registry",scope="repository:myproject/myimage:pull"
   X-Content-Type-Options: nosniff

Looks like `https://gitlab.com/jwt/auth` is the auth URL Maciej is speaking
of.

The docs also mention having to `link` the secret to the namespace's
`:default` service account for pod image pulling [0].  There's a step or
two extra there that Maciej had not yet mentioned.

[0]
https://docs.openshift.com/container-platform/3.7/dev_guide/managing_images.html#allowing-pods-to-reference-images-from-other-secured-registries
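
That link step is a one-liner (the secret name is a placeholder):

$ oc secrets link default gitlab-pull-secret --for=pull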




On Thu, Jan 18, 2018 at 2:01 PM, Gaurav P  wrote:

> Maciej,
>
> I have a similar problem, however with a private authenticated Artifactory
> registry fronted by haproxy.
>
> Tried the curl you suggested, but the WWW-Authenticate header in the
> response only contains 'Basic realm="Artifactory Realm"'.
>
> Struggling to find what that 2nd url should be.
>
> - Gaurav
>
> On Mon, Jan 8, 2018 at 6:20 AM, Maciej Szulik  wrote:
>
>> In short, there are two possible use-cases here.
>>
>> The first, in which the authorization is performed under the same URL as
>> the pull:
>>
>> 1. IS stays the same, no need to modify anything.
>> 2. Create a secret, eg:
>> oc secrets new-dockercfg  \
>>--docker-server= \
>>--docker-username= \
>>--docker-password= \
>>--docker-email=
>>
>> 3. Re-run the import:
>>   oc import-image 
>>
>>
>> The second, in which authorization is delegated to a different URL:
>> 1. IS stays the same, no need to modify anything.
>> 2. Create a secret as previously.
>> 3. Create a 2nd secret again the authorization url. You can get it by
>> trying to curl the image
>>data, eg. curl -v https:///v2//
>> in return you should
>>see the HTTP/1.1 401 Unauthorized with information where to
>> authenticate, eg:
>>WWW-Authenticate: Bearer realm="",service="docker-registry"
>>use that auth URL for docker-server when creating the second secret.
>> 4. Re-run import.
>>
>> Hope that helps,
>> Maciej
>>
>>
>>
>>
>>
>> On Thu, Jan 4, 2018 at 2:53 PM, Alan Christie <
>> achris...@informaticsmatters.com> wrote:
>>
>>> Thanks for your guidance so far Maciej, but none of this is working for
>>> me. [1] doesn't really help as I'm past that, and sadly the 1,500 lines and
>>> numerous posts in issue 9584 [2] are exhausting to trawl through and
>>> still leave me unable to pull from GitLab using an image stream.
>>>
>>> Again, I have a working DC/IPS solution. I understand secrets, DCs and
>>> IPS but I still cannot get ImageStreams to work. I just get…
>>>
>>> *Internal error occurred: Get
>>> https://registry.gitlab.com/v2/myproject/myimage/manifests/latest:
>>> denied: access forbidden.*
>>>
>>> I’m just about exhausted.
>>>
>>> So, if my setup is:
>>>
>>>- *OpenShift 3.6.1*
>>>- An image that's: *myproject/myimage:latest*
>>>- A registry that’s: *registry.gitlab.com
>>>*
>>>- A pull secret that works for DC/IPS - i.e. I can pull the image
>>>from the private repo with my DC and the installed secret.
>>>
>>> What...
>>>
>>>- would my *ImageStream* yaml template or json look like?
>>>- would I need to change in my working DC yaml?
>>>- if any, are the crucial roles my OC user needs?
>>>
>>>
>>> On 3 Jan 2018, at 11:03, Maciej Szulik  wrote:
>>>
>>> Have a look at [1] which should explain how to connect the IS with the
>>> secret. Additionally,
>>> there's [2] which explains problems when auth is delegated to a
>>> different uri.
>>>
>>> Maciej
>>>
>>>
>>> [1] https://docs.openshift.org/latest/dev_guide/managing_images.html#private-registries
>>> [2] https://github.com/openshift/origin/issues/9584
>>>
>>> On Wed, Jan 3, 2018 at 10:34 AM, Alan Christie <
>>> achris...@informaticsmatters.com> wrote:
>>>
 Hi all,

 I’m successfully using a DeploymentConfig (DC) and an ImagePullSecret
 (IPS) templates with OpenShift Origin v3.6 to spin-up my application from a
 container image hosted on a private GitLab registry. But I want the
 deployment to re-deploy when the GitLab image changes and to do this I
 believe I need to employ 

Re: Adding host storage during advanced installation

2018-03-20 Thread Louis Santillan
Patrick,

First, I think you're misunderstanding the order of operations here just a
bit.  `docker-storage-setup` is part of the host preparation steps and
happens on all nodes in the cluster before running the Advanced Installer.
The Container and PaaS Practice's Consulting Playbooks [0] might help your
understanding.

I'm not fully intimate with CRI-O's internals, but it looks like there are
some options to set [1].  I don't see any further details on GH so I
suspect those details are forthcoming.

[0] http://v1.uncontained.io/playbooks/installation/
[1]
https://github.com/kubernetes-incubator/cri-o/blob/master/docs/crio.conf.5.md
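
For reference, the host-prep step usually amounts to this on each node
before the Ansible run (device and VG names are placeholders):

# cat /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
# docker-storage-setup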




On Tue, Mar 20, 2018 at 7:53 AM, Patrick Hemmer 
wrote:

> How are you supposed to add additional storage to the host when using the
> advanced installation procedure (the openshift-ansible repo)?
>
> The documentation (https://docs.openshift.com/container-platform/3.9/install_config/install/host_preparation.html#configuring-docker-storage)
> says that you have to use this "docker-storage-setup" script, which gets
> installed during the ansible run. So it seems like ansible would need to
> pause after installing the script so the user can go run it. But this is
> just wrong as that's not how ansible should be used.
>
> Additionally, the naming of that script sounds like it's Docker-specific.
> How should one handle CRI-O storage? Or does that not need anything special
> on top of just having a big /var ?
>
> -Patrick
>


Re: Some frustrations with OpenShift

2018-03-27 Thread Louis Santillan
On Mon, Mar 26, 2018 at 1:15 PM, Clayton Coleman 
wrote:

>
>
> On Mon, Mar 26, 2018 at 11:50 AM, Alfredo Palhares 
> wrote:
>
>> Hello everyone,
>>
>>
>> I would like to share some of the frustrations that I currently have with
>> OpenShift, which are making me not consider it as a base for our container
>> infrastructure.
>> - No visualization of the cluster out of the box
>>
>
> Generally that has been a responsibility of CloudForms / manageiq.  What
> sorts of visualizations are you looking for?
>

Cockpit is also available as a minimalist's cluster visualizer [0].  If you
need help installing, try the Consulting Playbooks [1].


[0]
https://github.com/openshift/openshift-ansible/blob/release-3.7/inventory/byo/hosts.example#L295-L299
[1] http://v1.uncontained.io/playbooks/installation/
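
i.e., in the Ansible inventory, something like:

```
[OSEv3:vars]
osm_use_cockpit=true
osm_cockpit_plugins=['cockpit-kubernetes']
```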



Re: Not able to route to services

2018-03-27 Thread Louis Santillan
Isn't the default port for your registry 5000?  Try
`curl -kv https://docker-registry.default.svc:5000/healthz` [0][1].

[0] https://access.redhat.com/solutions/1616953#health
[1]
https://docs.openshift.com/container-platform/3.7/install_config/registry/accessing_registry.html#accessing-registry-metrics




On Tue, Mar 27, 2018 at 6:39 AM, Tim Dudgeon  wrote:

> Something strange has happened in my environment which has resulted in not
> being able to route to any of the services.
> Earlier this was all working fine. The install was done using the ansible
> installer and this is happening with 3.6.1 and 3.7.1.
> The services are all there are running fine, and DNS is working, but I
> can't reach them. e.g. from the master node:
>
> $ host docker-registry.default.svc
> docker-registry.default.svc.cluster.local has address 172.30.243.173
> $ curl -k https://docker-registry.default.svc/healthz
> curl: (7) Failed connect to docker-registry.default.svc:443; No route to
> host
>
> Any ideas on how to work out what's gone wrong?
>
>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Where to mount the NFS volume

2018-10-04 Thread Louis Santillan
Read [0]; it describes how to create a PV from NFS.  Note that you don't
mount the export on any node yourself: the kubelet mounts it on whichever
node the consuming pod is scheduled to.  You should also be able to add
storage from the web console.

[0]
https://docs.openshift.com/container-platform/3.10/install_config/persistent_storage/persistent_storage_nfs.html
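
As a minimal sketch (the server and path are placeholders for your NFS
host; the example in [0] has more detail):

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv0001
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /exports/pv0001
    server: nfs.example.com
  persistentVolumeReclaimPolicy: Retain
```

Create it with `oc create -f nfs-pv.yaml`, then bind to it from your
project with a PVC.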

___

LOUIS P. SANTILLAN

Architect, OPENSHIFT & DEVOPS

Red Hat Consulting,  Container and PaaS Practice

lsant...@redhat.com   M: 3236334854

TRIED. TESTED. TRUSTED. 




On Mon, Oct 1, 2018 at 2:46 AM Gaurav Ojha  wrote:

> Hi,
>
> Just a quick question. I have a multi-master cluster, with 2 masters, 2
> compute nodes and 2 infrastructure nodes, and I want to use NFS for
> persistence. But I cant seem to understand a basic question like where do I
> mount the volume? Do I mount it inside each compute node, or the master or
> the infra node?
>
> My guess is that it cannot be the master node, and should be in both of
> the compute nodes?
>
> Regards
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Unable to manually reclaim an existing pv

2018-10-12 Thread Louis Santillan
Carlos,

To "clean up" the PV, you need to remove the "instance data" associated
with the binding with the previous PVC.  There's a handful of lines that
need to be deleted if you typed `oc edit pv/pv-x` (and then save the
object).  Using the following PV as an example, delete the `claimRef` and
`status` sections of the yaml document, then save & quit.  Run `oc get pv`
again it should show up as available.

```

# oc get pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
EXPORT_block: "\nEXPORT\n{\n\tExport_Id = 5;\n\tPath =
/export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tPseudo
  = /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9;\n\tAccess_Type
= RW;\n\tSquash
  = no_root_squash;\n\tSecType = sys;\n\tFilesystem_id =
5.5;\n\tFSAL {\n\t\tName
  = VFS;\n\t}\n}\n"
Export_Id: "5"
Project_Id: "0"
Project_block: ""
Provisioner_Id: d5abc261-5fb7-11e7-8769-0a580a800010
kubernetes.io/createdby: nfs-dynamic-provisioner
pv.kubernetes.io/provisioned-by: example.com/nfs
  creationTimestamp: 2017-07-05T07:30:36Z
  name: pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  resourceVersion: "60641"
  selfLink: /api/v1/persistentvolumes/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
  uid: d6521c6e-6153-11e7-b249-000d3a1a72a9
spec:
  accessModes:
  - ReadWriteMany
  capacity:
storage: 1Gi
  claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: nfsdynpvc
namespace: 3z64o
resourceVersion: "60470"
uid: d63a35a5-6153-11e7-b249-000d3a1a72a9
  nfs:
path: /export/pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9
server: 172.30.206.205
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-provisioner-3z64o
status:
  phase: Released

```
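
If you'd rather skip the editor, the same cleanup can be done with a JSON
patch.  This is a sketch reusing the PV name from the dump above; `status`
is owned by the controller and should be refreshed once the claimRef is
gone:

```
oc patch pv pvc-d63a35a5-6153-11e7-b249-000d3a1a72a9 --type=json \
  -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```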




___

LOUIS P. SANTILLAN

Architect, OPENSHIFT & DEVOPS

Red Hat Consulting,  Container and PaaS Practice

lsant...@redhat.com   M: 3236334854

TRIED. TESTED. TRUSTED. 




On Mon, Oct 8, 2018 at 3:04 PM Carlos María Cornejo Crespo <
carlos.cornejo.cre...@gmail.com> wrote:

> Hi folks,
>
> I'm not able to manually reclaim a pv and would like to know what I'm
> doing wrong.
> My setup is openshift 3.9 with glusterFS getting installed as part of the
> openshift installation.
>
> The inventory setup creates a storage class for gluster and also makes it
> the default one.
>
> As the setup by default is reclaim policy to Delete and I want to keep the
> pv when I delete the pvc I created a new storage class as follows:
>
> # storage class
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   annotations:
> storageclass.kubernetes.io/is-default-class: "false"
>   name: glusterfs-retain
> parameters:
>   resturl: http://myheketi-storage-glusterfs.domainblah.com
>   restuser: admin
>   secretName: heketi-storage-admin-secret
>   secretNamespace: glusterfs
> provisioner: kubernetes.io/glusterfs
> reclaimPolicy: Retain
>
> and if I make a deployment requesting a volume via pvc it works well and
> the pv gets bounded as expected
>
> # deployment
> - kind: DeploymentConfig
>   apiVersion: v1
>   ..
> spec:
>   spec:
>   volumeMounts:
>   - name: "jenkins-data"
> mountPath: "/var/lib/jenkins"
> volumes:
> - name: "jenkins-data"
>   persistentVolumeClaim:
> claimName: "jenkins-data"
>
> #pvc
> - kind: PersistentVolumeClaim
>   apiVersion: v1
>   metadata:
> name: "jenkins-data"
>   spec:
> accessModes:
> - ReadWriteOnce
> resources:
>   requests:
> storage: 30Gi
> storageClassName: glusterfs-retain
>
> Now if I delete the pvc and try to reclaim that pv by creating a new
> deployment that refers to it is when I get the unexpected behaviour. A new
> pvc is created but that generates a new pv with the same name and the
> original pv stays as Released and never gets Available.
>
> How do I manually make it available? According to this
>  I
> need to manually clean up the data on the associated storage asset??? How
> am I supposed to do this if the volumen has been dynamically provisioned by
> GlusterFS?? I´m pretty sure it must be much simpler than that.
>
> Any advise?
>
> Kind regards,
> Carlos M.
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Architecture High Availability

2019-02-27 Thread Louis Santillan
Sergio,

Some customers already have a previously tuned etcd cluster.  We can point
OpenShift at that cluster if desired; that is the external etcd scenario,
and it can take significant load off the master nodes.  The other scenario
is when OpenShift installs an etcd cluster on the master nodes themselves
(stacked etcd).  I rarely hear about the former scenario anymore.
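
In openshift-ansible terms, the difference is simply which hosts you list
in the `[etcd]` inventory group (hostnames below are hypothetical):

```
# Stacked etcd: co-located with the masters
[etcd]
master1.example.com
master2.example.com
master3.example.com

# External etcd: dedicated hosts instead
# [etcd]
# etcd1.example.com
# etcd2.example.com
# etcd3.example.com
```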


As for masters, the idea is that etcd quorum [0] (in the stacked scenario
above) requires a majority of members, i.e. `floor(n/2) + 1`, so clusters
should be odd-sized.  Essentially, that means we start with 3 members (2
needed for quorum) and should grow the cluster by 2 members at a time to
stay on that curve.  However, since etcd must replicate its DB across the
masters, each additional member greatly increases the network traffic
needed to keep the DB in sync, for relatively little availability gain.
See the table below.

[0] https://coreos.com/etcd/docs/latest/faq.html
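
Concretely, per the failure-tolerance table in the etcd FAQ [0]:

```
cluster size   quorum   failures tolerated
     1            1             0
     3            2             1
     5            3             2
     7            4             3
```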

___

LOUIS P. SANTILLAN

Architect, OPENSHIFT & DEVOPS

Red Hat Consulting,  Container and PaaS Practice

lsant...@redhat.com   M: 3236334854

TRIED. TESTED. TRUSTED. 




On Wed, Feb 27, 2019 at 4:55 AM Sérgio Cascão  wrote:

> hi Trevor, thanks for your response.
>
> in your link they talk about HA, and these  case  you have one etcd for
> each master, if the one master goes down, you have always the etcd
> available.
>
>
>> It therefore provides an HA setup where losing a control plane instance
>> or an etcd member has less impact and does not affect the cluster
>> redundancy as much as the stacked HA topology.
>
>
> But my question is more about performance, for example, i know that
> inplementation you have more latency in acess to master. if you can show me
> more vantages and advantages I would be grateful.
>
> best regards
> Sergio
>
> W. Trevor King wrote on Tue, 26/02/2019 at 22:30:
>
>> On Tue, Feb 26, 2019 at 2:26 PM Sérgio Cascão wrote:
>> > i like know what the advantages between put etcd separated from masters?
>>
>> Some more docs around these choices in [1].
>>
>> Cheers,
>> Trevor
>>
>> [1]: https://kubernetes.io/docs/setup/independent/ha-topology/
>>
> ___
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users