Re: Openshift and Power loss

2018-07-02 Thread Erik Jacobs
Did you perhaps not have persistent storage for your registry, and was that
the host that lost power?

That would cause etcd to believe the image stream entries still exist, while
the registry no longer actually has the image.
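(For anyone finding this in the archives: the usual fix is to back the registry with a PersistentVolumeClaim. A minimal sketch follows; the claim name, size, and access mode are illustrative assumptions, not taken from this thread.)

```shell
# Sketch of a claim the integrated registry could use; a matching PV
# (NFS, Gluster, ...) must exist first. Name and size here are hypothetical.
cat > registry-claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
EOF

# On a live cluster one would then attach it (not runnable here):
#   oc create -f registry-claim.yaml -n default
#   oc set volume dc/docker-registry -n default --add --overwrite \
#     --name=registry-storage -t pvc --claim-name=registry-claim
grep -c 'ReadWriteMany' registry-claim.yaml   # → 1
```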

---

ERIK JACOBS

PRINCIPAL TECHNICAL MARKETING MANAGER, CLOUD PLATFORMS

Red Hat Inc <https://www.redhat.com/>

ejac...@redhat.com | M: 646.462.3745 | @: erikonopen
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>


On Wed, Jun 20, 2018 at 9:42 AM Hetz Ben Hamo  wrote:

> Looking directly at one of the nodes, I see it as this:
>
> docker-registry.default.svc:5000/image-uploader/app-cli   7234ee8e0215   2 days ago   647 MB
>
> However, trying to pull it, I get:
>
> # docker pull docker-registry.default.svc:5000/image-uploader/app-cli
> Using default tag: latest
> Trying to pull repository
> docker-registry.default.svc:5000/image-uploader/app-cli ...
> Pulling repository docker-registry.default.svc:5000/image-uploader/app-cli
> Error: image image-uploader/app-cli:latest not found
>
>
> On Wed, Jun 20, 2018 at 4:21 PM, Clayton Coleman 
> wrote:
>
>>
>>
>> On Jun 20, 2018, at 7:32 AM, Hetz Ben Hamo  wrote:
>>
>> Even with all the redundant hardware and UPSes these days, power loss can happen.
>>
>> I just had a power loss, and upon powering on those machines most of the
>> services weren't working. Checking the pods showed almost all of them in an
>> error state. I deleted the pods and they were automatically recreated, so
>> most of the services were running, but the images inside this OpenShift
>> system went dead (redeploying my stuff gave an error that images cannot be
>> pulled).
>>
>>
>> If you get “images cannot be pulled” it usually means those images don’t
>> exist anymore.  If you try to docker pull those images, what happens?
>>
>>
>> I looked at the documents, both the commercial and the origin version,
>> but there is nothing which talks about this issue, nor any script that will
>> fix this issue after powering on this system.
>>
>> Is there such a document or any script that fixes such an issue?
>>
>> Thanks,
>> Hetz
>>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: query on persistent volumes

2017-11-16 Thread Erik Jacobs
On Thu, Sep 7, 2017 at 1:53 AM, Pri <priyanka4opensh...@gmail.com> wrote:

> Hi Erik,
>
> Apologies for late response. I would like to know what happens to gluster
> storage in both cases.
>
> 1) scale up or down the gluster pods
>

Gluster is deployed using a daemonset, so you can't just "scale up" the
pods. You would need to add nodes to the cluster and then give them the
appropriate labels. Then you would get more gluster pods. But they are not
yet part of the storage pool, so you would have to perform heketi-cli /
topology commands/changes to get them in. It's not as trivial as scale++
and magic happens, yet.

2) scale up or down the app pods which are using gluster as persistent
> volume.
>

It depends on the PV/PVC definition. If you have an RWX PV/PVC then scaling
up would result in the new instance getting attached to the same volume,
similar to how NFS behaves. If the volume is RWO, then you cannot scale the
app.

I'm assuming that whatever your claim is (RWX/RWO) is what you would end up
getting if you are using dynamic provisioning.
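To make the RWX/RWO distinction concrete, here is a minimal claim one might request for an app that needs to scale; the name and size are made up for illustration:

```shell
# ReadWriteMany (RWX): all replicas, even on different nodes, mount the same
# volume read-write, so scaling up just attaches more pods to the volume.
# With ReadWriteOnce (RWO) the volume is limited to a single node, which is
# why the app cannot safely be scaled. Name and size are hypothetical.
cat > app-claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
grep -c 'ReadWriteMany' app-claim.yaml   # → 1
```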


> Thanks a lot again!
>
> Thanks,
> Pri
>
> On Wed, Aug 16, 2017 at 5:44 AM, Erik Jacobs <ejac...@redhat.com> wrote:
>
>> Hi Pri,
>>
>> Are you asking about what happens when you scale up the Gluster pods, or
>> the app pods?
>>
>>
>> On Fri, Aug 4, 2017 at 10:59 AM, Pri <priyanka4opensh...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am using glusterfs (container native storage) on OCP 3.5. I have one
>>> doubt, what happens to the storage when we scale up the pods (replicas=5) ,
>>> will all the pods persist data on same storage?
>>>
>>> Would be great if someone can help me understand this.
>>>
>>> Thanks in advance.
>>> Pri
>>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: questions about externalIP usage

2017-08-15 Thread Erik Jacobs
Hi Jared,

Did you previously configure the cluster for externalip usage?

https://docs.openshift.org/latest/admin_guide/tcp_ingress_external_ports.html
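In short, the doc above boils down to allowing the external IPs in the master config. A sketch of the relevant stanza (the field name follows the 3.x admin guide; the CIDR is an example value, not from this thread):

```shell
# Without this, externalIPs set on a Service are rejected/ignored by the
# master. Use your own address range.
cat > master-config-snippet.yaml <<'EOF'
networkConfig:
  externalIPNetworkCIDRs:
  - 10.1.0.0/16
EOF

# Note: traffic must still reach a node that actually answers on the
# external IP (e.g. via the ipfailover component or manual assignment).
grep -c 'externalIPNetworkCIDRs' master-config-snippet.yaml   # → 1
```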


On Thu, Aug 10, 2017 at 4:12 AM, Yu Wei <yu20...@hotmail.com> wrote:

> Hi guys,
>
> I deployed redis with replication controller successfully on openshift
> origin cluster.
>
> Then I tried to create service for external clients to connect.
>
> However, it seemed that it didn't work.
>
> How could I debug similar problem? Is there any guidance about using
> externalIP in openshift?
>
>
> The detailed information is as below,
>
>
>
> [root@host-10-1-236-92 gluster]# oc get svc
> NAME                CLUSTER-IP     EXTERNAL-IP                           PORT(S)              AGE
> glusterfs-cluster   172.30.6.143                                         1/TCP                1d
> redis-svc           172.30.51.20   10.1.236.92,10.1.236.93,10.1.241.55   26379/TCP,6379/TCP   24m
>
> [root@host-10-1-236-92 gluster]# oc describe svc redis-svc
> Name:              redis-svc
> Namespace:         openshiift-servicebroker
> Labels:
> Selector:          sb-2017-redis-master=master
> Type:              ClusterIP
> IP:                172.30.51.20
> Port:              redis-sen  26379/TCP
> Endpoints:         172.30.41.5:26379
> Port:              redis-master  6379/TCP
> Endpoints:         172.30.41.5:6379
> Session Affinity:  None
> No events.
>
> [root@host-10-1-236-92 gluster]# cat redis-master-svc.yaml
> ---
> kind: Service
> apiVersion: v1
> metadata:
>   name: redis-svc
> spec:
>   selector:
>     sb-2017-redis-master: master
>   ports:
>   - name: redis-sen
>     protocol: TCP
>     port: 26379
>     targetPort: 26379
>   - name: redis-master
>     protocol: TCP
>     port: 6379
>     targetPort: 6379
>   externalIPs:
>   - 10.1.236.92
>   - 10.1.236.93
>   - 10.1.241.55
>
>
> Thanks,
>
> Jared, (韦煜)
> Software developer
> Interested in open source software, big data, Linux
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: query on persistent volumes

2017-08-15 Thread Erik Jacobs
Hi Pri,

Are you asking about what happens when you scale up the Gluster pods, or
the app pods?


On Fri, Aug 4, 2017 at 10:59 AM, Pri <priyanka4opensh...@gmail.com> wrote:

> Hi,
>
> I am using glusterfs (container native storage) on OCP 3.5. I have one
> doubt, what happens to the storage when we scale up the pods (replicas=5) ,
> will all the pods persist data on same storage?
>
> Would be great if someone can help me understand this.
>
> Thanks in advance.
> Pri
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin Active Directory Authentication

2017-07-27 Thread Erik Jacobs
Hi Mark,

Is there any possibility that you could look at the LDAP/AD server to see
what OpenShift is trying to bind with?

That might give you an idea about what is being sent across, and/or why it
isn't working.
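For comparison purposes, the bind OpenShift attempts is driven by the identity provider stanza in the master config, roughly like the sketch below (all DNs, hosts, and the password are placeholders, not values from this thread). Reproducing the same bind with `ldapsearch` from the master is a quick way to cross-check what the AD side sees:

```shell
cat > ldap-provider-snippet.yaml <<'EOF'
oauthConfig:
  identityProviders:
  - name: my_ad_provider
    challenge: true
    login: true
    provider:
      apiVersion: v1
      kind: LDAPPasswordIdentityProvider
      attributes:
        id: [ "sAMAccountName" ]
      bindDN: "CN=svc-openshift,OU=Service Accounts,DC=example,DC=com"
      bindPassword: "changeme"
      insecure: false
      ca: ad-ca.crt
      url: "ldaps://ad.example.com:636/CN=Users,DC=example,DC=com?sAMAccountName"
EOF

# Cross-check the same bind/search from the CLI (placeholder values):
#   ldapsearch -x -H ldaps://ad.example.com:636 \
#     -D "CN=svc-openshift,OU=Service Accounts,DC=example,DC=com" -W \
#     -b "CN=Users,DC=example,DC=com" "(sAMAccountName=jdoe)"
grep -c 'bindDN' ldap-provider-snippet.yaml   # → 1
```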


On Thu, Jul 13, 2017 at 12:00 AM, Werner, Mark <mark.wer...@unisys.com>
wrote:

> I think actually for me it would be journalctl --unit origin-master.service.
>
>
>
> Still that is a lot of log to parse through and I really don’t see
> anything regarding logon or authentication. I do see the error messages for
> when the master service was not starting but I have been past that for a
> while.
>
>
>
> Also, my understanding was that since this was installed with Ansible I
> could just go to /etc/sysconfig/origin-master and modify the line
> OPTIONS=--loglevel=2. Which I did, to OPTIONS=--loglevel=5. Then restarted
> origin-master service. Then tried a logon, but haven’t come across anything
> in the logs that tells me anything.
>
>
>
> *Mark Werner* | Senior Systems Engineer | Cloud & Infrastructure Services
>
> Unisys | Mobile Phone 586.214.9017 <(586)%20214-9017> |
> mark.wer...@unisys.com
>
> 11720 Plaza America Drive, Reston, VA 20190
>
>
>
> [image: unisys_logo] <http://www.unisys.com/>
>
>
>
> THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
> MATERIAL and is for use only by the intended recipient. If you received
> this in error, please contact the sender and delete the e-mail and its
> attachments from all devices.
>
> [image: Grey_LI] <http://www.linkedin.com/company/unisys>  [image:
> Grey_TW] <http://twitter.com/unisyscorp> [image: Grey_GP]
> <https://plus.google.com/+UnisysCorp/posts>[image: Grey_YT]
> <http://www.youtube.com/theunisyschannel>[image: Grey_FB]
> <http://www.facebook.com/unisyscorp>[image: Grey_Vimeo]
> <https://vimeo.com/unisys>[image: Grey_UB] <http://blogs.unisys.com/>
>
>
>
> *From:* Steve Kuznetsov [mailto:skuzn...@redhat.com]
> *Sent:* Wednesday, July 12, 2017 11:44 PM
> *To:* Werner, Mark <mark.wer...@unisys.com>
> *Cc:* dev <dev@lists.openshift.redhat.com>; Jordan Liggitt <
> jligg...@redhat.com>
> *Subject:* RE: OpenShift Origin Active Directory Authentication
>
>
>
> You could look at master logs:
>
>
>
> journalctl --unit atomic-openshift-master.service
>
>
>
> But I think Jordan was looking for client logs, so:
>
>
>
> oc login ... --loglevel 4
>
>
>
> On Jul 12, 2017 8:38 PM, "Werner, Mark" <mark.wer...@unisys.com> wrote:
>
> Jordan,
>
>
>
> Do you happen to know what journalctl command to use to view logs related
> to logons?
>
>
>
> Thanks,
>
>
>
>
>
>
> *From:* Jordan Liggitt [mailto:jligg...@redhat.com]
> *Sent:* Wednesday, July 12, 2017 11:15 PM
> *To:* Werner, Mark <mark.wer...@unisys.com>
> *Cc:* Derek Wright <derekmwri...@gmail.com>;
> dev@lists.openshift.redhat.com
> *Subject:* Re: OpenShift Origin Active Directory Authentication
>
>
>
> Bump up the log level on the apiserver to 4 (--loglevel=4) and capture the
> log messages during a login attempt
>
>
>
> On Wed, Jul 12, 2017 at 11:05 PM, Werner, Mark <mark.wer...@unisys.com>
> wrote:
>
> Thank you. That is what I was kind of assuming. And there is my problem. I
> cannot get a successful logon with an AD user. I am out of ideas. It is
> easy enough to delete old identi

Re: when attempting to login to the Open shift GUI webpage I get "invalid login or password please try again" how can I see this in a log file ?

2017-06-08 Thread Erik Jacobs
Hi Brian,

It could also be valuable to examine the LDAP server logs to see the
requests that are coming in. If you had misconfigured your ldap search
string or other LDAP server information, it might look like the user login
is invalid but really your access to the LDAP server is what's invalid...

I hope you figure it out! Let us know if you still have issues.


On Tue, May 23, 2017 at 7:29 PM, Jonathan Yu <jaw...@redhat.com> wrote:

> Hi Brian,
>
> I suggest increasing the log level: https://docs.openshift.com/
> container-platform/3.5/install_config/master_node_
> configuration.html#master-node-config-logging-levels
>
> You can also try the ldapsearch command-line tool, I've found that pretty
> helpful in the past...
>
> On Tue, May 23, 2017 at 1:59 PM, Brian Keyes <bke...@vizuri.com> wrote:
>
>> I am having issues after configuring my openshift to use my LDAP ( AD) to
>> authenticate
>>
>> I attempt to login but get the error ""invalid login or password please
>> try again" what log would give me more info on what is wrong ?
>>
>>
>> --
>> Brian Keyes
>> Systems Engineer, Vizuri
>> 703-855-9074 <%28703%29%20855-9074>(Mobile)
>> 703-464-7030 x8239 <%28703%29%20464-7030> (Office)
>>
>> FOR OFFICIAL USE ONLY: This email and any attachments may contain
>> information that is privacy and business sensitive.  Inappropriate or
>> unauthorized disclosure of business and privacy sensitive information may
>> result in civil and/or criminal penalties as detailed in as amended Privacy
>> Act of 1974 and DoD 5400.11-R.
>>
>>
>>
>
>
> --
> Jonathan Yu / Software Engineer, OpenShift by Red Hat / @jawnsy
> <https://twitter.com/jawnsy>
>
> *“There are a million ways to get rich. But there’s only one way to stay
> rich: Humility, often to the point of paranoia. The irony is that few
> things squash humility like getting rich in the first place.”* — Morgan
> Housel, Getting Rich vs. Staying Rich
> <http://www.collaborativefund.com/blog/getting-rich-vs-staying-rich/>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift dedicated/container platform multi-tenancy

2017-06-06 Thread Erik Jacobs
Hi Pri,

Red Hat software evaluations are not time-bombed - they do not cease
working after the evaluation period. However, you would not have access to
any of the rest of the Red Hat value proposition after expiration -
support, updates, knowledge base access, and so on and so forth.

Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Jun 6, 2017 07:28, "Pri"  wrote:

Hi Steve,

One more query here: if we try the OCP trial version, will my installation
stop working after the trial period is over? I have one OCP installation
done a few months back with a trial version, so I was curious to know this.

Thanks,
Priy

On Tue, Jun 6, 2017 at 11:11 AM, Pri  wrote:

> Thanks Steve, it is really helpful. We are trying to get in touch with
> RedHat representative and will be soon able to ask queries. Thanks
>
> Thanks,
> Priy
>
> On Mon, Jun 5, 2017 at 11:34 PM, Steve Speicher 
> wrote:
>
>> On Wed, May 31, 2017 at 8:56 AM, Pri 
>> wrote:
>>
>>> Hi,
>>>
>>> I believe OpenShift Online (next gen) is multi-tenant but not enterprise
>>> ready.
>>>
>>
>>> OpenShift dedicated and Container platform I believe are enterprise
>>> ready (please correct if this is wrong) but all document says both are
>>> single-tenant. Could you please help explaining how multi-tenancy is
>>> achieved for these??
>>>
>> Hi!
>>
>> From Red Hat's perspective, Dedicated is a single tenant (customer). That
>> customer can have many users/tenants on their cluster. This is the same for
>> how someone runs OpenShift Container Platform (OCP) locally. Though with
>> OCP, you have full control over the scheduler. So you could isolate
>> users on different nodes.
>>
>>
>>> I understand that there could be project level separation but we want to
>>> keep almost all data separate for each tenant as well as a separate docker
>>> registry, so that it would be easy to identify resource usage for each.
>>>
>>> I have used container platform before but not sure about OpenShift
>>> dedicated, how user management is done; does Red Hat provide only a single
>>> user?
>>>
>> No, Red Hat connects to your identity and auth provider. More
>> information is at: https://www.openshift.com/dedicated/
>>
>>
>>>
>>> Also I would like to understand how next gen is multi-tenant; does each
>>> customer get a separate OpenShift cluster, or is it just a different user?
>>>
>> OpenShift Online Next Generation is a single cluster shared by many
>> users. Each user gets a certain amount of quota to run their applications.
>> That quota can be applied to one or many of the nodes in the cluster. The
>> cluster determines the best place to place it.
>>
>>
>>>
>>>
>>> Looking for clarification on these points, also would be very helpful if
>>> you could share some  available docs explaining the same.
>>>
>>
>> You can also look at : https://docs.openshift.com/con
>> tainer-platform/3.5/security/hosts_multitenancy.html
>>
>> I'd recommend reaching out to a Red Hat representative to talk through
>> some of these product specific questions. This list is usually more focused
>> towards developer topics with origin, so I want to make sure you get the
>> right help you need.
>>
>> You can always reach out to me directly if you are not sure who or where
>> to go.
>>
>> Regards,
>> Steve Speicher
>>
>>
>>>
>>>
>>> Thanks,
>>> Priy
>>>
>>>
>>>
>>>
>>>
>>>
>>
>

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: high tcp retransmissions between pods on same openshift node

2017-05-17 Thread Erik Jacobs
We did make changes to the version of openvswitch that we were using. What
is the underlying infrastructure?


On Tue, Apr 18, 2017 at 5:06 AM, Roosemeyers Erik <erik.roosemey...@arxus.eu
> wrote:

> Hi Erik,
>
>
>
> Thx for picking up my questions…
>
>
>
> We’re using OCP v3.1 and the openshift-sdn.
>
>
>
> Both IP and DNS name of containers are used in the tests.
>
>
>
> If it can help, I can include more detailed test results in various
> scenarios.
>
>
>
> Do you think a newer version of ocp will improve on this ? thx !
>
>
>
> Kind regards,
>
> Erik
>
>
>
> *From:* Erik Jacobs [mailto:ejac...@redhat.com]
> *Sent:* maandag 10 april 2017 22:45
> *To:* Roosemeyers Erik <erik.roosemey...@arxus.eu>
> *Cc:* dev@lists.openshift.redhat.com
> *Subject:* Re: high tcp retransmissions between pods on same openshift
> node
>
>
>
> Hi Erik,
>
>
>
> Are you using openshift-sdn or multitenant-sdn or something else? Are you
> running OpenShift Container Platform 3.1?
>
>
>
> Are you just communicating with the pod IP directly?
>
>
>
>
>
> On Tue, Mar 28, 2017 at 4:27 AM, Roosemeyers Erik <
> erik.roosemey...@arxus.eu> wrote:
>
> Hi all,
>
>
>
> I'm testing network connectivity (speed and reliability) on a OpenShift
> cluster 3.1.
>
>
>
> I do this with 2 pods, each based on iperf3 Docker image.
>
>
>
> First pod starts iperf3 as server, other one as iperf3 client with: iperf3
> -c iperf3 -t 120 (iperf3 name of server pod).
>
>
>
> What I notice is high throughput but also big retransmission rates:
>
> eg.
>
> duration  Transfer  Bandwidth
> Retransmissions (TCP)
>
> 0.00-120.00 sec 214 GBytes 15.3 Gbits/sec   170478
>
>
>
> If the pods are on 2 different nodes, the retransmission rate is smaller
> (as is throughput):
>
>
>
> duration  Transfer  Bandwidth
> Retransmissions (TCP)
>
> 0.00-120.00 sec 22.6 GBytes   1.62 Gbits/sec   3627
>
>
>
> What I was wondering: what causes these higher retransmission rates?
> (Usually it's a sign that there are network problems, but is it here?)
>
>
>
> (I did some other tests, like deploying the containers via the Docker
> daemon onto 1 server. And even on a local laptop with only Docker daemon
> installed - and I always saw these tcp retransmission rates...)
>
>
>
>
>
> Thanks in advance !
>
>
>
> kind regards,
>
> Erik
>
>
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: service groupings data

2017-04-14 Thread Erik Jacobs
Hi Brandon,

They are an annotation on the service. Note that which service you annotate
affects which service will be used to create a route. Here is an example:

https://github.com/openshift-roadshow/mlbparks/blob/master/ose3/application-template-eap.json#L584-L586

If you group foo-service *TO* bar-service, and then click "create route" in
the UI, you will be exposing foo-service. I hope that makes sense!
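The annotation itself, per the linked template, is `service.alpha.openshift.io/dependencies`, whose value is a small JSON list. A sketch (service names are hypothetical) that validates the payload before applying it:

```shell
# The value is a JSON array describing the services to group under the
# annotated one; service names below are made up for the example.
DEPS='[{"name": "bar-service", "namespace": "", "kind": "Service"}]'

# Sanity-check the JSON before handing it to oc.
echo "$DEPS" | python3 -m json.tool > /dev/null && echo "valid JSON"

# On a cluster, this would group bar-service under foo-service
# (not runnable here):
#   oc annotate svc foo-service \
#     "service.alpha.openshift.io/dependencies=$DEPS"
```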


On Tue, Apr 4, 2017 at 4:22 PM, Brandon Richins <brandon.rich...@imail.org>
wrote:

> When you group services in the OpenShift UI, where are those groupings
> stored?  I tried viewing the service and deployment config yaml files but
> didn’t see the grouping anywhere.  I’d love to run an `oc` command to list
> or set the groupings.
>
>
>
> *Brandon Richins*
>
>
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: JVM console access in openshift 3.3

2017-03-31 Thread Erik Jacobs
What does the EAP template specially do that makes the jolokia agent
"work"? Anything?

Or, more specifically, what is required of a pod/dc in order for the
console link to appear? I feel like this has been discussed on the list
before, maybe.

I would assume that any Java application with the Jolokia agent running
inside should also work this way. Do we have a specific support restriction
on the java console functionality?



On Fri, Mar 24, 2017 at 8:38 AM, Josef Karasek <jkara...@redhat.com> wrote:

> Pri,
>
> make sure you're deploying eap using a template [1]. That way you have a
> guarantee that the configuration works as intended.
>
> Once the eap pod is ready, you can access jolokia through the UI:
> [image: Inline image 1]
> You should never expose jolokia outside of OpenShift - this would create a
> severe security risk by making JMX accessible from the outside world.
>
> [1] https://github.com/jboss-openshift/application-templates
>
> On Fri, Mar 24, 2017 at 7:21 AM, Pri <priyanka4opensh...@gmail.com> wrote:
>
>> Hi Erik,
>> Hi Jochen,
>>
>> I have EAP running on OCP and the sample app  from "
>> https://github.com/jboss-developer/jboss-eap-quickstarts.git
>> <https://github.com/jboss-developer/jboss-eap-quickstarts/tree/6.4.x/kitchensink>
>> " has jolokia agent running. I can see that in logs "Jolokia: Agent
>> started with URL https://10.120.1.158:8778/jolokia/
>> <https://10.128.1.154:8778/jolokia/>"
>>
>> But how to access the JVM console in browser? there is no link in the
>> description page of pod. Could you please help on this?
>>
>> Thanks,
>> Priy
>>
>> On Tue, Mar 14, 2017 at 6:49 PM, Pri <priyanka4opensh...@gmail.com>
>> wrote:
>>
>>> Hi Jochen,
>>>
>>> Thanks for the response and apologies for delayed response from my side.
>>> How to setup jolokia  agent with in an app? could you please provide some
>>> details or documents if any?
>>>
>>> Thanks,
>>> Priy
>>>
>>> On Sat, Mar 11, 2017 at 2:14 AM, Jochen Cordes <jcor...@redhat.com>
>>> wrote:
>>>
>>>> In addition to have the Jolokia agent deployed with the app, you also
>>>> need a port named jolokia exposed (the port number seems to be irrelevant)
>>>>
>>>> On Thu, Mar 9, 2017 at 5:25 PM, Erik Jacobs <ejac...@redhat.com> wrote:
>>>>
>>>>> Hi Priyanka,
>>>>>
>>>>> This is designed, IIRC, to work with the Jolokia agent that runs in
>>>>> our Wildfly/EAP xPaaS images.
>>>>>
>>>>> Does your Java container have Jolokia running on the default port and
>>>>> exposed in the service?
>>>>>
>>>>> I'll see if I can't scrounge up more docs on how that's supposed to
>>>>> work, but, at a minimum, I think Jolokia is a requirement.
>>>>>
>>>>>
>>>>> Erik M Jacobs, RHCA
>>>>> Principal Technical Marketing Manager, OpenShift
>>>>> Red Hat, Inc.
>>>>> Phone: 646.462.3745 <(646)%20462-3745>
>>>>> Email: ejac...@redhat.com
>>>>> AOL Instant Messenger: ejacobsatredhat
>>>>> Twitter: @ErikonOpen
>>>>> Freenode: thoraxe
>>>>>
>>>>> On Tue, Mar 7, 2017 at 12:25 PM, Pri <priyanka4opensh...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> According to this document https://docs.openshift.com/con
>>>>>> tainer-platform/3.3/architecture/infrastructure_components/w
>>>>>> eb_console.html#jvm-console
>>>>>>
>>>>>> Openshift has built in JVM console for java application, but I can
>>>>>> not see that in my installation,
>>>>>>
>>>>>> Can anyone please help? How to access JVM console, Is there any extra
>>>>>> configuration required for this?
>>>>>>
>>>>>> Thanks,
>>>>>> Priyanka
>>>>>>
>>>>>> ___
>>>>>> dev mailing list
>>>>>> dev@lists.openshift.redhat.com
>>>>>> http://lists.o

Re: JVM console access in openshift 3.3

2017-03-09 Thread Erik Jacobs
Hi Priyanka,

This is designed, IIRC, to work with the Jolokia agent that runs in our
Wildfly/EAP xPaaS images.

Does your Java container have Jolokia running on the default port and
exposed in the service?

I'll see if I can't scrounge up more docs on how that's supposed to work,
but, at a minimum, I think Jolokia is a requirement.
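Pulling the pieces together, the fragment below sketches what a pod spec appears to need for the console link to show up: the Jolokia agent running in the container, plus a containerPort literally named `jolokia` (8778 is the agent's usual default; the image name is a placeholder, and this is an assumption drawn from the discussion, not official documentation):

```shell
cat > jolokia-port-snippet.yaml <<'EOF'
spec:
  containers:
  - name: eap-app
    image: registry.example.com/eap64-openshift   # placeholder image
    ports:
    # The web console reportedly keys off a port with this exact name;
    # the number itself seems not to matter, 8778 is the agent default.
    - name: jolokia
      containerPort: 8778
      protocol: TCP
EOF
grep -c 'name: jolokia' jolokia-port-snippet.yaml   # → 1
```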



On Tue, Mar 7, 2017 at 12:25 PM, Pri  wrote:

> Hi,
>
> According to this document https://docs.openshift.com/
> container-platform/3.3/architecture/infrastructure_
> components/web_console.html#jvm-console
>
> Openshift has built in JVM console for java application, but I can not see
> that in my installation,
>
> Can anyone please help? How to access JVM console, Is there any extra
> configuration required for this?
>
> Thanks,
> Priyanka
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: App is not able to talk with a third party app (installed on another infrastructure).

2017-02-03 Thread Erik Jacobs
Hi Francesco,

It sounds like there is a problem with your software defined network. How
have you installed OpenShift? Is it Origin or OCP?



On Thu, Feb 2, 2017 at 5:02 AM, Francesco D'Andria 
wrote:

> Clayton thanks for answering.
>
> We are quite sure something in OpenShift is blocking the call
> (we deployed a Docker container on the same infrastructure but outside
> OpenShift, and the call is OK).
> Do you know where I can check (internally to OpenShift) to be sure
> the OpenShift firewall is correctly configured?
>
> thank you again and regards
> Francesco
>
> 2017-01-30 19:00 GMT+01:00 Clayton Coleman :
> > Usually those are firewall rules blocking your access to the cluster.
> > Have you verified that each node is able to ping your other cluster?
> >
> >> On Jan 30, 2017, at 11:41 AM, Francesco D'Andria 
> wrote:
> >>
> >> Hi all,
> >>
> >> I've just installed an instance of OpenShift Origin on my
> infrastructure.
> >> I deployed an app on my space (as docker image) and I realized the app
> >> is not able to talk with a third party app (installed on another
> >> infrastructure).
> >>
> >> Could anyone please suggest me how can I overcome with this issue?
> >> Thanks in advance
> >>
> >> BR
> >>
> >> Francesco
> >>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: service discover - always confuse

2017-02-03 Thread Erik Jacobs
Hi Srinivas,

You can look at the dnsmasq configuration on the node to see what is
happening.

Basically dnsmasq should have 2 configurations -- one for ".local", more or
less, and one for everything else.

During the installation process (if we are talking about OCP), dnsmasq is
configured to resolve anything that is **NOT** .local via the original
resolver configured for the node.
Anything that is .local should be resolved via the Kubernetes service IP
address.

Do you have two config files in the dnsmasq.d folder (or whatever) on your
node? If so, what do their contents look like?
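As a concrete illustration of the split described above, the node-level dnsmasq config usually boils down to two `server=` rules, roughly like this (the file names, cluster suffix, and IPs are typical defaults, not values taken from this thread):

```shell
mkdir -p dnsmasq.d

# Cluster-suffix names (and reverse lookups for the service CIDR) go to the
# kubernetes service IP.
cat > dnsmasq.d/origin-dns.conf <<'EOF'
server=/cluster.local/172.30.0.1
server=/30.172.in-addr.arpa/172.30.0.1
EOF

# Everything else goes to the node's original upstream/corporate resolver
# (placeholder address).
cat > dnsmasq.d/origin-upstream-dns.conf <<'EOF'
server=10.0.0.2
EOF

# Show the effective forwarding rules.
grep -h '^server=' dnsmasq.d/*.conf
```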



On Wed, Feb 1, 2017 at 12:50 PM, Srinivas Naga Kotaru (skotaru) <
skot...@cisco.com> wrote:

>
>
> If all containers forward to nodes for name resolution, and nodes forward
> to the master for DNS resolution, what does the master do for external
> queries for which it is not authoritative? I am wondering how curl and ping
> work from containers while dig/nslookup does not, unless we mention an
> @nameserver in the query.
>
>
>
> Can someone explain how name resolution works in the scenarios below:
>
> -  POD to corporate resources
>
> -  POD to external resources
>
>
>
> What does the master nameserver (dnsmasq) do in the above cases?
>
>
>
> --
>
> *Srinivas Kotaru*
>
>
>
> *From: *"ccole...@redhat.com" 
> *Date: *Tuesday, January 31, 2017 at 1:24 PM
> *To: *Srinivas Naga Kotaru 
> *Cc: *dev 
> *Subject: *[SUSPICIOUS] Re: service discover - always confuse
>
>
>
> Including the list correctly.
>
>
>
> On Tue, Jan 31, 2017 at 4:06 PM, Clayton Coleman 
> wrote:
>
>
>
>
> On Jan 30, 2017, at 1:51 AM, Srinivas Naga Kotaru (skotaru) <
> skot...@cisco.com> wrote:
>
> Hi
>
>
>
I observed 2 different behaviors on my platform and am not sure whether they
are expected. Can you clarify the behaviors below?
>
>
>
> 1.   Name resolution is not working for external domains, although ping
> and curl work as expected
>
>
>
> Examples:
>
>
>
> # oc rsh kong-app3-792309857-1i4xk
>
>
>
> # ping -c1 google.com
>
> PING google.com (216.58.204.110) 56(84) bytes of data.
>
> 64 bytes from par10s28-in-f14.1e100.net (216.58.204.110): icmp_seq=1
> ttl=47 time=85.7 ms
>
> $ curl -IL google.com
>
> HTTP/1.1 302 Found
>
> Cache-Control: private
>
> Location: http://www.google.com.co/?gfe_rd=cr&ei=6d6OWOCgJPLU8ge42JWoCA

Re: cluster health metrics or API end points

2017-02-01 Thread Erik Jacobs
Re-adding the list (bad reply-to)


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, Feb 1, 2017 at 4:26 PM, Erik Jacobs <ejac...@redhat.com> wrote:

> Hi Srinivas,
>
> I'm not sure if this is everything you're looking for, but here is what
> OpenShift operations uses (Zabbix) to monitor the health of Online and
> Dedicated clusters:
>
> https://github.com/openshift/openshift_zabbix
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift
> Red Hat, Inc.
> Phone: 646.462.3745 <(646)%20462-3745>
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Fri, Jan 27, 2017 at 4:18 PM, Srinivas Naga Kotaru (skotaru) <
> skot...@cisco.com> wrote:
>
>> We want to measure the health of the OpenShift cluster in every possible
>> way and report status back to clients on a single simple page. I have a
>> few things in mind.
>>
>>
>>
>> Health of:
>>
>> · API servers
>>
>> · etcd servers
>>
>> · nodes (kubectl??)
>>
>> · SDN
>>
>> · PV’s
>>
>> · Routers shards
>>
>> · Ingress controllers
>>
>> · Docker (every node)
>>
>> · Docker storage volume (every node)
>>
>>
>>
>> I am sure we have REST APIs available to measure the health of most of the
>> above critical components. Can you shed some light on the following?
>>
>>
>>
>> · What are the APIs?
>>
>> · Is there any better way to measure the health of these critical components?
>>
>>
>>
>> --
>>
>> *Srinivas Kotaru*
>>
>>
>>
>
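
For the API side of that question: the master and etcd of that era both
expose simple health endpoints that a status page can poll (/healthz on the
master API port, /health on etcd). A sketch of a poll loop that only prints
the probes it would run; the hostnames are hypothetical, and you would wire
the URLs into curl or etcdctl to actually poll:

```shell
# Print the health probes a status page would run (dry run only; the
# hostnames are hypothetical, so nothing is contacted here).
probe_plan() {
  for target in "$@"; do
    echo "probe ${target%%=*} at ${target#*=}"
  done
}

probe_plan \
  api=https://master.example.com:8443/healthz \
  etcd=https://etcd1.example.com:2379/health
```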


Re: openshift 3.3 HA cluster

2017-01-03 Thread Erik Jacobs
This is correct.

I am guessing you will also need fully resolving forward DNS; it looks
like you are using short names from a hosts file.


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Tue, Jan 3, 2017 at 3:18 AM, Akram Ben Aissi <akram.benai...@gmail.com>
wrote:

> Hi Pri,
>
> as stated initially, if you want HA, you will need at least 3 etcd servers
> which, in your case, implies 3 masters.
>
> Akram
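
Concretely, that means adding a third host so the [masters] and [etcd]
groups in the inventory become (hostnames illustrative):

```
[masters]
masterhost1
masterhost2
masterhost3

[etcd]
masterhost1
masterhost2
masterhost3
```

etcd needs an odd number of members to keep quorum; with only two, losing
either one leaves the cluster unable to elect a leader.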
>
>
> On 3 January 2017 at 08:10, Pri <priyanka4opensh...@gmail.com> wrote:
>
>> Hi Erik, Akram,
>>
>> I would like to hear from you on this. Would you be able to look at the
>> above inventory and let me know whether it is right for a high-availability
>> OpenShift architecture?
>>
>> Thanks a lot for help!
>>
>> Thanks,
>> Priy
>>
>> On Wed, Dec 21, 2016 at 11:47 AM, Pri <priyanka4opensh...@gmail.com>
>> wrote:
>>
>>> Hi Erik,
>>>
>>> Thanks for the response. Below is my Ansible inventory. Please suggest
>>> whether it needs to be modified for HA.
>>>
>>> # Create an OSEv3 group that contains the master, nodes, etcd, and lb
>>> groups.
>>> # The lb group lets Ansible configure HAProxy as the load balancing
>>> solution.
>>> # Comment lb out if your load balancer is pre-configured.
>>> [OSEv3:children]
>>> masters
>>> nodes
>>> etcd
>>>
>>> # Set variables common for all OSEv3 hosts
>>> [OSEv3:vars]
>>> ansible_ssh_user=root
>>> deployment_type=openshift-enterprise
>>> openshift_pkg_version=-3.3.1.5
>>> openshift_master_console_port=443
>>> openshift_master_api_port=443
>>> openshift_image_tag=v3.3.1.5
>>> # Uncomment the following to enable htpasswd authentication; defaults to
>>> # DenyAllPasswordIdentityProvider.
>>> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login':
>>> 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider',
>>> 'filename': '/etc/origin/master/htpasswd'}]
>>>
>>> # Native high availability cluster method with optional load balancer.
>>> # If no lb group is defined installer assumes that a load balancer has
>>> # been preconfigured. For installation the value of
>>> # openshift_master_cluster_hostname must resolve to the load balancer
>>> # or to one or all of the masters defined in the inventory if no load
>>> # balancer is present.
>>> openshift_master_cluster_method=native
>>> openshift_master_cluster_hostname=elbhostname
>>> openshift_master_cluster_public_hostname=elbhostname
>>> openshift_registry_selector='region=infra'
>>> openshift_hosted_router_selector='region=infra'
>>>
>>> # override the default controller lease ttl
>>> #osm_controller_lease_ttl=30
>>>
>>> # host group for masters
>>> [masters]
>>> masterhost1
>>> masterhost2
>>>
>>> # host group for etcd
>>> [etcd]
>>> masterhost1
>>> masterhost2
>>>
>>>
>>> # host group for nodes, includes region info
>>> [nodes]
>>> infranodehost openshift_node_labels="{'region': 'infra', 'zone':
>>> 'default'}" openshift_schedulable=true
>>> masterhost1 openshift_node_labels="{'region': 'master1', 'zone':
>>> 'default'}" openshift_schedulable=true
>>> masterhost2 openshift_node_labels="{'region': 'master2', 'zone':
>>> 'default'}" openshift_schedulable=true
>>>
>>> Thanks,
>>> Priya
>>>
>>> On Tue, Dec 20, 2016 at 3:23 AM, Erik Jacobs <ejac...@redhat.com> wrote:
>>>
>>>> On Thu, Dec 15, 2016 at 2:25 AM, Pri <priyanka4opensh...@gmail.com>
>>>> wrote:
>>>>
>>>>> Thanks Igor and Akram, I was able to configure TCP on the ELB. For HA,
>>>>> what if a region has only two availability zones? Can we configure 2
>>>>> masters in one and 1 master in the other AZ?
>>>>>
>>>>> I am not running etcd externally as of now; it's embedded in the master
>>>>> hosts themselves. Is this the right architecture?
>>>>>
>>>>
>>>> How do you have your Ansible inventory configured? What's your Ansible
>>>> hosts file look like?
>>>>
>>>>
>>>>> Also I have one more query, how 

Re: openshift 3.3 HA cluster

2016-12-19 Thread Erik Jacobs
On Thu, Dec 15, 2016 at 2:25 AM, Pri  wrote:

> Thanks Igor and Akram, I was able to configure TCP on the ELB. For HA,
> what if a region has only two availability zones? Can we configure 2
> masters in one and 1 master in the other AZ?
>
> I am not running etcd externally as of now; it's embedded in the master
> hosts themselves. Is this the right architecture?
>

How do you have your Ansible inventory configured? What's your Ansible
hosts file look like?


> Also, I have one more query: how do I restart the master if I make a change
> in master-config.yaml? "systemctl restart atomic-openshift-master" doesn't
> seem to work.
>

If you have multiple masters you need to:

* change it on all masters
* restart atomic-openshift-master-controllers and -api -- the -master
service doesn't run/do anything in an HA/multi-master cluster.
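
Sketched as a loop over the masters (dry run; the hostnames are hypothetical
and the commands are only printed -- drop the echo and run them over ssh, or
on each master directly, to actually apply the restart):

```shell
# Emit the restart sequence to run on every master after editing
# master-config.yaml in an HA cluster. Hostnames are hypothetical;
# this only prints the commands rather than executing them.
restart_cmds() {
  for master in "$@"; do
    for svc in atomic-openshift-master-api atomic-openshift-master-controllers; do
      echo "ssh $master systemctl restart $svc"
    done
  done
}

restart_cmds master1.example.com master2.example.com
```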

>
> Thanks,
> Priya
>
>
> On Thu, Dec 15, 2016 at 3:13 AM, Akram Ben Aissi  > wrote:
>
>> on more point: You need 3 masters for HA, unless you are running etcd
>> externally.
>>
>>
>> On 14 December 2016 at 18:25, Igor Katson  wrote:
>>
>>> Hi, Pri, here's how the setup works for us in prod:
>>>
>>>
>>>- the master ELB MUST be configured to do TCP balancing on port 443.
>>>Not HTTPS. You need to do TCP, because the masters do TLS termination and
>>>SNI by themselves.
>>>- the "openshift_master_cluster_hostname" variable is set to the
>>>name of the ELB. Actually, in our setup it is an extra DNS record which 
>>> is
>>>a CNAME to the ELB, so that we can change the ELB if needed. E.g.
>>>"internal.openshift.youdomain" that is a CNAME to the ELB.
>>>- the "openshift_master_cluster_public_hostname" is set to the
>>>publicly-visible DNS name, that also points to this ELB. E.g.
>>>"openshift.yourdomain", where you can get valid SSL certs issued.
>>>
>>>  In case you have a public SSL cert, you may put smth like this into
>>> inventory (make sure it's a valid json string):
>>>   "openshift_master_named_certificates": [
>>> {
>>>   "certfile": "your-cert-file-on-ansible-machine",  // this may
>>> include intermediate certs bundled
>>>   "keyfile": "your-key-file-on-ansible-machine"
>>> }
>>>   ],
>>>
>>> On Wed, Dec 14, 2016 at 7:07 AM, Pri 
>>> wrote:
>>>
 Hi,

 I am setting up an OpenShift HA cluster with 2 masters and 2 nodes on AWS. I
 want my masters to be backed by an Elastic Load Balancer. But it doesn't work
 when I give "openshift_master_cluster_hostname=" as the ELB
 hostname in Ansible. So I tried giving one of the masters' hostnames here,
 which worked fine. After that I configured the ELB on AWS and added the 2
 master instances. Now the problem is that whenever I access the OpenShift
 console using the ELB hostname, it just redirects me to a master IP address,
 which is not what we want; the hostname in the browser should stay consistent.

 Also I am not very sure which SSL certificate to configure on the ELB when
 it listens on HTTPS port 443 for console access.


 Could you please help me with this?

 Thanks a lot for help

 Thanks,
 Priya



>>>
>>>
>>>
>>
>
>
>


Re: AWS ELB Configuration for Multi-Master

2016-12-13 Thread Erik Jacobs
Hi Isaac,

Have you configured your ELB as per the current published AWS reference
architecture for OpenShift?


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, Dec 7, 2016 at 3:15 AM, Igor Katson  wrote:

> Hi, Isaac,
>
> I have some experience with that setup. Not sure why you aren't able to see
> the logs at all, but the setup I used successfully is the following:
> - TCP load balancing on port 443
> - ELB idle timeout maxed out, so that you don't get disconnected while
> looking at the logs or executing commands with oc (or kubectl) exec.
>
> On Tue, Dec 6, 2016 at 8:04 PM, Isaac Christoffersen <
> ichristoffer...@vizuri.com> wrote:
>
>> Is there guidance somewhere to show how to configure the ELB for a
>> Multi-Master configuration in AWS?  I am having issues with
>> websocket disconnects in the web console.  Specifically, I'm seeing server
>> interrupted messages and I'm unable to view the logs or terminal for a pod.
>>
>> I've tried both the classic ELB and the new ALB and am still getting
>> disconnects.
>>
>> Isaac
>>
>>
>>
>>
>
>
>


Re: Three-tier application deployment on OpenShift origin

2016-05-09 Thread Erik Jacobs
On Mon, May 9, 2016 at 9:02 AM, ABDALA Olga <olga.abd...@solucom.fr> wrote:

>
>
>
>
> *De :* Erik Jacobs [mailto:ejac...@redhat.com]
> *Envoyé :* lundi 9 mai 2016 14:31
>
> *À :* ABDALA Olga
> *Cc :* dev@lists.openshift.redhat.com
> *Objet :* Re: Three-tier application deployment on OpenShift origin
>
>
>
> On Mon, May 9, 2016 at 4:56 AM, ABDALA Olga <olga.abd...@solucom.fr>
> wrote:
>
> Hello Erik,
>
>
>
> Please find my comments inline
>
>
>
> *De :* Erik Jacobs [mailto:ejac...@redhat.com]
> *Envoyé :* mercredi 4 mai 2016 17:32
> *À :* ABDALA Olga
> *Cc :* dev@lists.openshift.redhat.com
> *Objet :* Re: Three-tier application deployment on OpenShift origin
>
>
>
>
>
> On Wed, May 4, 2016 at 8:30 AM, ABDALA Olga <olga.abd...@solucom.fr>
> wrote:
>
> Hello Erik,
>
>
>
> Thank you for your inputs.
>
> However, while trying to update the label for my Nodes, here is what I
> get:
>
>
>
>
>
> labels are single key/value pairs. You are trying to add an additional
> zone label without specifying --overwrite. You cannot have multiple values
> for the same key.
>
>
>
> Same thing if I try to update my pods’ labels.
>
>
>
> Changing a pod label is not what you want to do. You want to change the
> pod nodeselector.
>
> Ø  Yes I guess that is what I will have to change
>
>
>
> Yes.
>
>
>
> For the NodeSelector, where can I find the pod configuration file, for me
> to specify the Node,  please?
>
> Is it in the *master-config.yaml* file?
>
>
>
> master-config.yaml is the master configuration, not a "pod configuration".
> "pod configuration" is kind of a strange statement. You probably mean "pod
> definition".
>
> Ø  By « pod definition », do you mean the pod yaml file?
>
>
>
> That is one example, yes.
>
>
>
>
>
> We'll ignore nodeselector and master-config because while it's a thing, it
> won't do what you want. If you're interested, docs here:
> https://docs.openshift.org/latest/admin_guide/managing_projects.html#setting-the-cluster-wide-default-node-selector
> .
>
> Ø  After checking the docs, My question is : if the defaultNodeSelector
> in the master config file is set for a specific region, does that mean that
> pods will never be placed on the Nodes of that specific region?
>
>
>
> If the defaultNodeSelector is set, and you didn't somehow change it in the
> project, then the default node selector will *always* be applied, in
> addition to any pod-specific node selector. Whether that default
> nodeSelector is for "region", "zone", or any other arbitrary key/value pair
> is not relevant. The default is the default.
>
>
>
> I think you meant to ask "if the default... is set for a region... does
> that mean the pods will always be placed". Not "never". Why would the
> selector mean never? That sounds more like an anti-selector...
>
>
>
>  Always… yes, sorry, my bad
>
>
>
> What you want to change is the pod nodeselector. I linked to the docs:
>
>
>
>
> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>
> Ø  Just to make sure ; by setting a value to the « nodeSelector », will
> that put my pod to the specified Node?
>
>
>
> If you set a value for the nodeSelector your pod will attempt to be
> scheduled on nodes who have labels that match.
>
>
>
> If you want to run a pod on a specific node I believe there is also a way
> to select a specific node by its hostname. It's in the docs somewhere.
>
> Ok thanks
>
>
>
> I don't know how you created your pods, so how you change/add nodeselector
> depends.
>
> Ø  Actualy, I did not really ‘create’ the pods. What I did is, after
> creating a project and adding my application to the project, 1 pod was
> automatically created. From there, I simply increased the number of pods
> (from the web console) to as many as I wanted.
>
>
>
> Yes, so you have a deployment config that causes a replication controller
> to be created that then causes a pod to be created. As per below, "new-app"
> / "add to project" are basically the same thing. One is the UI and one is
> the CLI.
>
> Oh ok I see.
>
> Ø  By the way, I wanted to set something clear in my head regarding the
> pods. Does the number of pods mean the number of the application’s
> ‘versions’?
>
> I don't understand your question. The number of pods is the number of
> pods. What do you mean by "the application's 'versions'"?
>
> What I meant by application’s versions

Re: Three-tier application deployment on OpenShift origin

2016-05-04 Thread Erik Jacobs
Hi Luke,

I'll have to disagree but only semantically.

For a small environment and without changing the scheduler config, the
concept of "zone" can be used. Yes, I would agree with you that in a real
production environment the Red Hat concept of a "zone" is as you described.

You could additionally label nodes with something like "env=appserver" and
use nodeselectors on that. This is probably a more realistic production
expectation.

For the purposes of getting Abdala's small environment going, I guess it
doesn't much "matter"...
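
In practice the labeling and pinning look like this. The node name, label
key/value, and DC name are made up for the example; `oc label` on the node
plus a nodeSelector on the deployment config is the real mechanism:

```shell
# Build the nodeSelector patch for a deployment config. The oc commands
# need a live cluster, so they are shown as comments only:
#   oc label node node1.example.com env=appserver
#   oc patch dc/myapp -p "$(node_selector_patch env appserver)"
node_selector_patch() {
  printf '{"spec":{"template":{"spec":{"nodeSelector":{"%s":"%s"}}}}}' "$1" "$2"
}

node_selector_patch env appserver
```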


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Wed, May 4, 2016 at 11:36 AM, Luke Meyer <lme...@redhat.com> wrote:

>
>
> On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs <ejac...@redhat.com> wrote:
>
>> Hi Olga,
>>
>> Some responses inline/
>>
>>
>> Erik M Jacobs, RHCA
>> Principal Technical Marketing Manager, OpenShift Enterprise
>> Red Hat, Inc.
>> Phone: 646.462.3745
>> Email: ejac...@redhat.com
>> AOL Instant Messenger: ejacobsatredhat
>> Twitter: @ErikonOpen
>> Freenode: thoraxe
>>
>> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga <olga.abd...@solucom.fr>
>> wrote:
>>
>>> Hello all,
>>>
>>>
>>>
>>> I am done with my *origin advanced installation* (thanks to your useful
>>> help) which architecture is composed of *4 virtualized servers* (on the
>>> same network):
>>>
>>> -   1  Master
>>>
>>> -   2 Nodes
>>>
>>> -   1 VM hosting Ansible
>>>
>>>
>>>
>>> My next steps are to implement/test some use cases with a *three-tier
>>> App*(each App’s tier being hosted on a different VM):
>>>
>>> -   The * horizontal scalability*;
>>>
>>> -   The * load-balancing* of the Nodes : Keep the system running
>>> even if one of the VMs goes down;
>>>
>>> -   App’s monitoring using *Origin API*: Allow the Origin API to
>>> “tell” the App on which VM is hosted each tier. (I still don’t know how to
>>> test that though…)
>>>
>>>
>>>
>>> There are some * notions* that are still not clear to me:
>>>
>>> -   From my web console, how can I know *on which Node has my App
>>> been deployed*?
>>>
>>
>> If you look in the Browse -> Pods -> select a pod, you should see the
>> node where the pod is running.
>>
>>
>>> -   How can I put *each component of my App* on a *separated Node*?
>>>
>>> -   How does the “*zones*” concept in origin work?
>>>
>>
>> These two are closely related.
>>
>> 1) In your case it sounds like you would want a zone for each tier:
>> appserver, web server, db
>> 2) This would require a node with a label of, for example, zone=appserver
>> 3) When you create your pod (or replication controller, or deployment
>> config) you would want to specify, via a nodeselector, which zone you want
>> the pod(s) to land in
>>
>>
> This is not the concept of zones. The point of zones is to spread replicas
> between different zones in order to improve HA (for instance, define a zone
> per rack, thereby ensuring that taking down a rack doesn't take down your
> app that's scaled across multiple zones).
>
> This isn't what you want though. And you'd certainly never put a zone in a
> nodeselector for an RC if you're trying to scale it to multiple zones.
>
> For the purpose of separating the tiers of your app, you would still want
> to use a nodeselector per DC or RC and corresponding node labels. There's
> no other way to designate where you want the pods from different RCs to
> land. You just don't want "zones".
>
>
>
>> This stuff is scattered throughout the docs:
>>
>>
>> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>>
>> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>>
>> I hope this helps.
>>
>>
>>>
>>>
>>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>>
>>> [masters]
>>>
>>> sv5305.selfdeploy.loc
>>>
>>> # host group for nodes, includes region info
>>>
>>> [nodes]
>>>
>>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone':
>>> 'default'}" openshift_schedulable=false
>>>
>>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>>> 'zone': 'east'}"
>>>
>>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>>> 'zone': 'west'}"
>>>
>>>
>>>
>>> Thank you in advance.
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>> Olga
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>


Re: Runtime values in sti-php ini templates

2016-02-04 Thread Erik Jacobs
Hi Mateus,

At this time each image kind of handles this differently, as best I can
tell. For example, the JBoss EAP image will look for settings.xml in the
code repository and substitute that instead of the built-in one
(over-simplifying).

The issue is how you would do this in some kind of "generic" way. EG: how
do I inform any builder image that it should place file X from the code
repository into location Y, possibly creating a directory (eg: mkdir -p) in
the process...

Would you say the following user story is accurate?

As a user/developer with OpenShift
I want to place a (config) file in my source code repository
And through some mechanism tell the S2I process to place this file in a
specific location

Env vars would be the "values" of the config options, as opposed to the
config itself, I would think. For example, the "custom config mechanism"
might allow you to put a foo.ini file in a specific location, and that file
might contain a GETENV-type reference which would be substituted by an env
var of CONFIG_VALUE_FOO_THING=BLAH

Is that all an accurate assessment?
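
As a concrete sketch of the mechanism under discussion -- rendering a config
file from a template at container start using env var values -- here is a
dependency-free variant. It uses an @VAR@ placeholder and sed rather than
envsubst's ${VAR} syntax, and the variable name, license value, and paths are
all made up for illustration:

```shell
# Render an .ini template at startup by substituting one known env var.
# Placeholder style (@VAR@), variable name, and paths are illustrative;
# sed is used so the sketch needs nothing beyond a POSIX shell.
export NEW_RELIC_LICENSE_KEY=abc123

render() {
  sed "s/@NEW_RELIC_LICENSE_KEY@/${NEW_RELIC_LICENSE_KEY}/" "$1"
}

printf 'newrelic.license = "@NEW_RELIC_LICENSE_KEY@"\n' > /tmp/newrelic.ini.template
render /tmp/newrelic.ini.template   # -> newrelic.license = "abc123"
```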


Erik M Jacobs, RHCA
Principal Technical Marketing Manager, OpenShift Enterprise
Red Hat, Inc.
Phone: 646.462.3745
Email: ejac...@redhat.com
AOL Instant Messenger: ejacobsatredhat
Twitter: @ErikonOpen
Freenode: thoraxe

On Thu, Feb 4, 2016 at 8:23 AM, Mateus Caruccio <
mateus.caruc...@getupcloud.com> wrote:

> Hi Erik.
>
> That may work, but it won't be able to substitute runtime env vars. Also,
> it adds an extra step for something that should be simple: custom configs.
> The point here is not my specific use case. I'm looking now for a more
> generic way to allow users to define/overwrite container config in a user
> friendly way, like a simple file placed in a predetermined place inside the
> code repository.
>
> What I see now is the need for a simple template engine that adds little
> or no impact to already available docker images.
>
> Regards,
>
>
> *Mateus Caruccio*
> Master of Puppets
> +55 (51) 8298.0026
> gtalk:
>
>
> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
> This message and any attachment are solely for the intended
> recipient and may contain confidential or privileged information
> and it can not be forwarded or shared without permission.
> Thank you!
>
> On Thu, Feb 4, 2016 at 12:50 AM, Erik Jacobs <ejac...@redhat.com> wrote:
>
>> Hi Mateus,
>>
>> Maybe I'm misunderstanding the problem, but would the secrets mechanism
>> not work for this? You could have the ini file be a secret which would be
>> attached/mounted into the pod at run-time and could be in that folder as an
>> .ini file... I think?
>>
>> Ben?
>>
>>
>> Erik M Jacobs, RHCA
>> Principal Technical Marketing Manager, OpenShift Enterprise
>> Red Hat, Inc.
>> Phone: 646.462.3745
>> Email: ejac...@redhat.com
>> AOL Instant Messenger: ejacobsatredhat
>> Twitter: @ErikonOpen
>> Freenode: thoraxe
>>
>> On Mon, Feb 1, 2016 at 4:43 PM, Mateus Caruccio <
>> mateus.caruc...@getupcloud.com> wrote:
>>
>>> Hi.
>>>
>>> I need to run newrelic on a php container. Its license must be set from
>>> php.ini or any .ini inside /etc/opt/rh/rh-php56/php.d/.
>>>
>>> The problem is it needs to be set at run time, not build time, because
>>> the license key is stored in an env var.
>>>
>>> What is the best way to do that?
>>> Wouldn't it be good to have some kind of template processing like [1]?
>>> Something like this:
>>>
>>> for tpl in "$PHP_INI_SCAN_DIR"/*.template; do
>>>     envsubst < "$tpl" > "${tpl%.template}"
>>> done
>>>
>>> Is there any reason not to adopt this approach? Is it something origin
>>> would accept as a PR?
>>>
>>> [1]
>>> https://github.com/openshift/sti-php/blob/04a0900b68264642def9aaea9465a71e1075e713/5.6/s2i/bin/run#L20-L21
>>>
>>>
>>> *Mateus Caruccio*
>>> Master of Puppets
>>> +55 (51) 8298.0026
>>> gtalk:
>>>
>>>
>>> *mateus.caruc...@getupcloud.com <diogo.goe...@getupcloud.com>twitter:
>>> @MateusCaruccio <https://twitter.com/MateusCaruccio>*
>>> This message and any attachment are solely for the intended
>>> recipient and may contain confidential or privileged information
>>> and it can not be forwarded or shared without permission.
>>> Thank you!
>>>
>>>
>>>
>>
>