Re: Design questions around Container logs, EFK & OCP

2018-03-12 Thread Luke Meyer
Although you can set up the fluentd instances to send logs either to the
integrated storage or an external ES, it will be tricky to do both with the
same deployment. They are deployed with a daemonset. What you can do is
copy the daemonset and configure both as you like (with different
secrets/configmaps), using node selectors and node labels to have the right
ones land on the right nodes. However that will direct *all* of the node's
logs; I don't think there's an easy way to have the container logs go to
one destination and the service logs to another, without more in-depth
configuration of fluentd. That said, you do have complete control over its
config if you really want it, by modifying the configmap.
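
A sketch of that split-daemonset approach, with hypothetical label names (the
shipped daemonset is typically called logging-fluentd; the fluentd-dest key is
made up for illustration):

    # Label the nodes whose logs should go to the external ES
    oc label node node1.example.com fluentd-dest=external

    # Copy the shipped daemonset, then edit the copy: rename it, point its
    # nodeSelector at fluentd-dest=external, and mount the secrets/configmaps
    # for the external destination
    oc get daemonset logging-fluentd -o yaml > logging-fluentd-external.yaml
    oc create -f logging-fluentd-external.yaml

You'd also want to adjust the original daemonset's node selector so the two
copies never land on the same nodes.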

On Thu, Mar 8, 2018 at 4:15 AM, Mohamed A. Shahat  wrote:

> Thanks Aleks for the feedback.
>
> This looks promising.
>
> We're using Enterprise OCP. Does that make a difference at this level of
> the discussion?
>
> For the external Elasticsearch instance configs you referred to, is it
> possible for both to coexist? Some Worker nodes sending logs to the
> internal ES, and other Worker nodes sending logs to the external one?
>
>
> Opensource origin:
>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>> Enterprise:
>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>
>
>
> Many Thanks,
> /Mo
>
>
> On 7 March 2018 at 23:27, Aleksandar Lazic 
> wrote:
>
>> Hi.
>>
>> On 07.03.2018 at 23:47, Mohamed A. Shahat wrote:
>> > Hi All,
>> >
>> > My first question here, so I am hoping at least for some
>> > acknowledgement!
>> >
>> > _Background_
>> >
>> >   * OCP v3.7
>> >
>> Do you use the enterprise version or the opensource one?
>> >
>> >   * Several Worker Nodes
>> >   * Few Workload types
>> > One Workload, let's call it WorkloadA, is planned to have dedicated
>> > Worker Nodes.
>> >
>> > _Objective_
>> >
>> >   * for WorkloadA , I'd like to send/route the Container Logs to an
>> > External EFK / ELK stack other than the one that does get setup
>> > with OCP
>> >
>> > _Motivation_
>> >
>> >   * For Workload A, an ES cluster does already exist, we would like to
>> > reuse it.
>> >   * There is an impression that the ES cluster that comes with OCP
>> > might not necessarily scale if the team operating OCP does not
>> > size it well
>> >
>> > _Inquiries_
>> >
>> >  1. Has this been done before? Yes / No? Any comments?
>> >
>> Yes.
>> As you may know, handling logs in a proper way is not an easy task.
>> There are some serious questions to answer, like the following.
>>
>> * How long should the logs be preserved?
>> * How much log data is written?
>> * How fast are the logs written?
>> * What's the limit of the network?
>> * What's the limit of the remote ES?
>> * ...and many, many more
>>
>> >  2. Is there any way, with the fluentd pods or otherwise, to route
>> > specific Workload / Pod container logs to an external ES cluster?
>> >  3. If not, I'm willing to deploy my own fluentd pods. What do I lose
>> > by excluding the WorkloadA Worker Nodes from having the OCP
>> > fluentd pods? For example, I don't want to lose any Operations /
>> > OCP-related / Worker-Node-related logs going to the embedded ES
>> > cluster; all I need is for the container logs of WorkloadA to go to
>> > another ES cluster.
>> >
>> Have you looked at the following doc part?
>>
>> Opensource origin:
>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>
>> Enterprise:
>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>
>> As described in the docs, you can send the collected fluentd logs to an
>> external ES cluster.
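
(A 3.7-era sketch of what that configuration amounts to — these env var names
come from the aggregated logging docs, logging-fluentd is the default
daemonset name, and the host value is of course illustrative:)

    oc set env daemonset/logging-fluentd \
      ES_HOST=es.example.com ES_PORT=9200 \
      ES_CLIENT_CERT=/etc/fluent/keys/cert \
      ES_CLIENT_KEY=/etc/fluent/keys/key \
      ES_CA=/etc/fluent/keys/ca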
>>
>> You can find the source of the OpenShift logging solution in this repo.
>> https://github.com/openshift/origin-aggregated-logging
>>
>> > Looking forward to hearing from you,
>> >
>> > Thanks,
>> Hth
>> Aleks
>>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Reply: dev Digest, Vol 71, Issue 1

2018-02-07 Thread Luke Meyer
On Sun, Feb 4, 2018 at 7:51 AM, Zhang William  wrote:

> So there is no v3.8 version?
>

None was released. By the time the Kubernetes 1.8 code was rolled into
master, it was time to also roll in 1.9 changes. So 3.8 exists in the git
repo but was effectively skipped to catch up with Kubernetes.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
If the answer is just "go guru is dog slow, use something else in that
case" then that seems like a useful thing to note in the README :) Along
with what people actually use in development. Seems like a number of tools
rely on guru but everyone complains about how slow it is on large projects.
So that's one more vote for VS...

On Tue, Dec 5, 2017 at 10:53 AM, Dan Mace <dm...@redhat.com> wrote:

>
>
> On Tue, Dec 5, 2017 at 10:43 AM, Luke Meyer <lme...@redhat.com> wrote:
>
>> In the context of the vim-go plugin. However behavior seems much the same
>> if I run the same command at the command line (I pulled it out of ps -ef).
>>
>> On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug <se...@redhat.com> wrote:
>>
>>> Are you using guru in some sort of editor/IDE or just standalone?
>>>
>>> On Dec 5, 2017 9:40 AM, "Luke Meyer" <lme...@redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug <se...@redhat.com> wrote:
>>>>
>>>>> Sounds like you have got auto compile still on?
>>>>>
>>>>>
>>>> What does this mean in the context of go guru? Is there an env var to
>>>> set, an option to add, a config file to change to control this behavior?
>>>>
>>>>
>>>
>>
> The same query:
>
> guru -scope github.com/openshift/origin/cmd/oc whicherrs
> ./pkg/oc/admin/diagnostics/diagnostics.go:#7624
>
> was taking long enough for me (go1.8.3 darwin/amd64) that I killed it.
> It's hard to say without doing a deeper profile of that guru command. Even
> with your relatively narrow pointer analysis scope it seems really slow,
> but then again it's hard to gauge exactly how narrow that scope is without
> looking at a full import dependency graph...
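
One thing worth trying, if I remember the flag syntax right: guru's -scope
takes a comma-separated package list, and a leading '-' excludes packages, so
the (huge) vendor tree can be dropped from the pointer analysis — a sketch:

    guru -scope 'github.com/openshift/origin/cmd/oc,-github.com/openshift/origin/vendor/...' \
      whicherrs ./pkg/oc/admin/diagnostics/diagnostics.go:#7624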
>
> Guru has always been really slow for lots of useful pointer analysis
> queries, so I'm not entirely surprised. This is why vscode-go uses a
> variety of more optimized special purpose tools for most analysis[1].
>
> [1] https://github.com/Microsoft/vscode-go/blob/master/src/goInstallTools.ts#L21
>
> --
>
> Dan Mace
>
> Principal Software Engineer, OpenShift
>
> Red Hat
>
> dm...@redhat.com
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
On Tue, Dec 5, 2017 at 10:51 AM, Clayton Coleman <ccole...@redhat.com>
wrote:

> Openshift and Kubernetes are massive go projects - over 3 million lines of
> code (last I checked).  Initial compile can take a few minutes for these
> tools.  Things to check:
>
> 1. Go 1.9 uses less memory when compiling
> 2. Be sure you are reusing your go compiled artifacts dir between multiple
> tools (sometimes that is GOPATH/pkg, but openshift explicitly only compiles
> temp packages into _output/local/pkgdir for reasons)
>


So if I make clean all and then run my guru command, won't that be reusing
compiled artifacts? Is there some config that controls this? I don't think
I've customized anything.

It does seem to speed up a little bit after the first run but then it's
still pretty slow.
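For what it's worth, a sketch of Clayton's point 2 — pre-building dependency
archives so they land in GOPATH/pkg (the -i flag is the go1.9-era way to do
this; whether guru itself consults those archives is another matter, since it
loads and type-checks from source):

    go build -i github.com/openshift/origin/cmd/oc
    ls "$(go env GOPATH)/pkg/$(go env GOOS)_$(go env GOARCH)/github.com/openshift"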



> 3. Get faster laptop :)
>
> On Dec 5, 2017, at 9:44 AM, Luke Meyer <lme...@redhat.com> wrote:
>
> In the context of the vim-go plugin. However behavior seems much the same
> if I run the same command at the command line (I pulled it out of ps -ef).
>
> On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug <se...@redhat.com> wrote:
>
>> Are you using guru in some sort of editor/IDE or just standalone?
>>
>> On Dec 5, 2017 9:40 AM, "Luke Meyer" <lme...@redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug <se...@redhat.com> wrote:
>>>
>>>> Sounds like you have got auto compile still on?
>>>>
>>>>
>>> What does this mean in the context of go guru? Is there an env var to
>>> set, an option to add, a config file to change to control this behavior?
>>>
>>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
In the context of the vim-go plugin. However behavior seems much the same
if I run the same command at the command line (I pulled it out of ps -ef).

On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug <se...@redhat.com> wrote:

> Are you using guru in some sort of editor/IDE or just standalone?
>
> On Dec 5, 2017 9:40 AM, "Luke Meyer" <lme...@redhat.com> wrote:
>
>>
>>
>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug <se...@redhat.com> wrote:
>>
>>> Sounds like you have got auto compile still on?
>>>
>>>
>> What does this mean in the context of go guru? Is there an env var to
>> set, an option to add, a config file to change to control this behavior?
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Fwd: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
-- Forwarded message --
From: Luke Meyer <lme...@redhat.com>
Date: Tue, Dec 5, 2017 at 10:39 AM
Subject: Re: [aos-devel] optimizing go guru
To: Sebastian Jug <se...@redhat.com>
Cc: dev <d...@lists-openshift-redhat-com.vserver.prod.ext.phx2.redhat.com>




On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug <se...@redhat.com> wrote:

> Sounds like you have got auto compile still on?
>
>
What does this mean in the context of go guru? Is there an env var to set,
an option to add, a config file to change to control this behavior?
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin v3.6.0 is released

2017-07-31 Thread Luke Meyer
On Mon, Jul 31, 2017 at 11:34 AM, Clayton Coleman 
wrote:

> Remember to use the Ansible release-3.6 branch for your installs.
>
>
You can also skip installing Ansible and checking out the repo and just use
the containerized install image. Tag v3.6.0 is built from the release-3.6
branch.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


/etc/localtime

2016-07-06 Thread Luke Meyer
Is there a simple way to find out the host's local timezone without having
to mount /etc/localtime (which is pretty painful given it requires
hostmount)? Could there be some way it's passed in as an env var or
something?
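
The closest workaround I can sketch: recover the zone name on the host (on
systemd hosts /etc/localtime is usually a symlink into the zoneinfo tree) and
inject it at deploy time as a TZ env var, which glibc programs honor. The
deployment name here is hypothetical:

    readlink /etc/localtime        # e.g. ../usr/share/zoneinfo/America/New_York
    oc set env dc/myapp TZ=America/New_York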
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: packaging

2016-06-29 Thread Luke Meyer
Err yeah, https://github.com/openshift/origin/blob/master/origin.spec looks
promising. For some reason I was expecting it in hack/

On Wed, Jun 29, 2016 at 12:07 PM, Clayton Coleman <ccole...@redhat.com>
wrote:

> The spec file checked in to the repo is the same one that is used to build
> those RPMs, isn't it?
>
> On Jun 29, 2016, at 8:12 AM, Luke Meyer <lme...@redhat.com> wrote:
>
> The origin project itself doesn't maintain spec files. However you might
> find the Fedora and EPEL source rpms interesting:
>
> Fedora -
> https://kojipkgs.fedoraproject.org//packages/origin/1.2.0/1.git.0.2e62fab.fc24/src/origin-1.2.0-1.git.0.2e62fab.fc24.src.rpm
> CentOS/EPEL -
> http://cbs.centos.org/kojifiles/packages/origin/1.2.0/4.el7/src/origin-1.2.0-4.el7.src.rpm
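
(If the goal is just a local rebuild, those source RPMs can be fed straight
to rpmbuild — a sketch using the CentOS one above:)

    curl -LO http://cbs.centos.org/kojifiles/packages/origin/1.2.0/4.el7/src/origin-1.2.0-4.el7.src.rpm
    rpmbuild --rebuild origin-1.2.0-4.el7.src.rpm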
>
> On Mon, Jun 27, 2016 at 8:23 AM, Cameron Braid <came...@drivenow.com.au>
> wrote:
>
>> Hi,
>>
>> I'd like to build my own src.rpm for openshift origin (v1.3.0-alpha.2),
>> but I can't find where the relevant build/packaging scripts are.
>>
>> Cameron
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


readiness probes and clustered discovery

2016-05-19 Thread Luke Meyer
We have a plugin for Elasticsearch that clusters based on looking up endpoints
on its clustering service (which runs on a separate port, 9300, rather than
the HTTP port 9200). But in order to appear among the endpoints of a service,
the cluster members have to be considered "up"; so they must report ready
before they can even discover each other. The result is that there can't be a
meaningful readiness probe, and clients of the service get back errors until
it is really up.

We could get around this if readiness probes could be honored/ignored by
specific services, or if there were some other method of indicating a more
nuanced "readiness". If the service for port 9300 could consider the
members up once in "Running" state, but the service at port 9200 waited for
a readiness check, everything would work out well.

Is this strictly a kubernetes issue? Is there any movement in this
direction? It seems like something that many clustered services would
benefit from.
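
Kubernetes grew an alpha escape hatch aimed at exactly this pattern around
that time: a service annotation that publishes endpoints even for unready
pods, so a dedicated discovery service can see peers before they pass
readiness while the client-facing service still honors the probe. A sketch,
with a hypothetical discovery-service name:

    oc annotate service logging-es-cluster \
      service.alpha.kubernetes.io/tolerate-unready-endpoints=true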
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Fwd: linux file caching in containers?

2016-05-18 Thread Luke Meyer
Does anyone know if Linux file caching is compartmentalized in Docker
containers or accounted for in their memory limits?

The particular context of this question is Elasticsearch:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_give_less_than_half_your_memory_to_lucene

"Lucene is designed to leverage the underlying OS for caching in-memory
data structures.


Lucene
segments are stored in individual files. Because segments are immutable,
these files never change. This makes them very cache friendly, and the
underlying OS will happily keep hot segments resident in memory for faster
access."

So the question is: if I want to reserve 4GB (via JVM options) for
Elasticsearch running in a container, plus 4GB of file cache for Lucene
performance, do I reserve 8GB for the container, or try to ensure that the
host the container is running on has 4GB RAM free outside the container?
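
For reference: with cgroup v1 (what Docker uses), page cache is charged to
the container's memory cgroup, so it counts against the container's limit —
which argues for sizing the container to hold both the heap and the cache.
Under pressure the kernel reclaims cache before OOM-killing, and the
accounting can be inspected from inside the container:

    grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/memory.stat
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes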
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: binary source in a Custom type build

2016-05-09 Thread Luke Meyer
Works perfectly, thanks.

If anyone is curious,
https://github.com/openshift/origin-apiman/blob/ee1da0249ed095cde0727e6c461cc913f8fdeb73/apiman-builder/build.sh#L20
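
The gist, as a minimal sketch of a custom builder entrypoint (the target path
is illustrative):

    #!/bin/bash
    set -euo pipefail
    # Binary source (e.g. from --from-dir) arrives as a tar stream on stdin;
    # unpack it before running the actual build steps.
    mkdir -p /tmp/src
    tar -x -C /tmp/src < /dev/stdin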

On Thu, May 5, 2016 at 4:37 PM, Ben Parees <bpar...@redhat.com> wrote:

> I believe the content is being streamed into your stdin, so your custom
> image would need to read stdin as a tar stream.
>
> On Thu, May 5, 2016 at 4:31 PM, Luke Meyer <lme...@redhat.com> wrote:
>
>> How in a custom builder do you retrieve binary build content (from e.g.
>> the --from-dir flag)?
>> https://docs.openshift.org/latest/dev_guide/builds.html#binary-source
>> does not seem to give any clues. SOURCE_URI comes in blank. Is there a
>> secret handshake I'm missing?
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Three-tier application deployment on OpenShift origin

2016-05-04 Thread Luke Meyer
On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs  wrote:

> Hi Olga,
>
> Some responses inline/
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga 
> wrote:
>
>> Hello all,
>>
>>
>>
>> I am done with my *origin advanced installation* (thanks to your useful
>> help), whose architecture is composed of *4 virtualized servers* (on the
>> same network):
>>
>> -   1  Master
>>
>> -   2 Nodes
>>
>> -   1 VM hosting Ansible
>>
>>
>>
>> My next steps are to implement/test some use cases with a *three-tier
>> App* (each tier being hosted on a different VM):
>>
>> -   The *horizontal scalability*;
>>
>> -   The *load-balancing* of the Nodes: keep the system running
>> even if one of the VMs goes down;
>>
>> -   App’s monitoring using the *Origin API*: allow the Origin API to
>> “tell” the App on which VM each tier is hosted. (I still don’t know how to
>> test that though…)
>>
>>
>>
>> There are some *notions* that are still not clear to me:
>>
>> -   From my web console, how can I know *on which Node my App has
>> been deployed*?
>>
>
> If you look in the Browse -> Pods -> select a pod, you should see the node
> where the pod is running.
>
>
>> -   How can I put *each component of my App* on a *separate Node*?
>>
>> -   How does the “*zones*” concept in origin work?
>>
>
> These two are closely related.
>
> 1) In your case it sounds like you would want a zone for each tier:
> appserver, web server, db
> 2) This would require a node with a label of, for example, zone=appserver
> 3) When you create your pod (or replication controller, or deployment
> config) you would want to specify, via a nodeselector, which zone you want
> the pod(s) to land in
>
>
This is not the concept of zones. The point of zones is to spread replicas
between different zones in order to improve HA (for instance, define a zone
per rack, thereby ensuring that taking down a rack doesn't take down your
app that's scaled across multiple zones).

This isn't what you want though. And you'd certainly never put a zone in a
nodeselector for an RC if you're trying to scale it to multiple zones.

For the purpose of separating the tiers of your app, you would still want
to use a nodeselector per DC or RC and corresponding node labels. There's
no other way to designate where you want the pods from different RCs to
land. You just don't want "zones".
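
A sketch of that per-tier placement, with hypothetical tier and app names:

    # Label each node with the tier it should host
    oc label node sv5306.selfdeploy.loc tier=appserver

    # Pin the tier's deployment config to matching nodes
    oc patch dc/myapp-appserver \
      -p '{"spec":{"template":{"spec":{"nodeSelector":{"tier":"appserver"}}}}}'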



> This stuff is scattered throughout the docs:
>
>
> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>
> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>
> I hope this helps.
>
>
>>
>>
>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>
>> [masters]
>>
>> sv5305.selfdeploy.loc
>>
>> # host group for nodes, includes region info
>>
>> [nodes]
>>
>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone':
>> 'default'}" openshift_schedulable=false
>>
>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'east'}"
>>
>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'west'}"
>>
>>
>>
>> Thank you in advance.
>>
>>
>>
>> Regards,
>>
>>
>>
>> Olga
>>
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Excluding replacement pods from quota?

2016-05-02 Thread Luke Meyer
Use the Recreate deploy strategy rather than Rolling.
https://docs.openshift.org/latest/dev_guide/deployments.html#recreate-strategy
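
A sketch of the switch (with a hypothetical dc name). With Recreate, the old
pod is torn down before its replacement starts, so the two never count
against the non-terminating quota at the same time:

    oc patch dc/myapp -p '{"spec":{"strategy":{"type":"Recreate"}}}'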

On Sat, Apr 30, 2016 at 10:24 PM, Andrew Lau  wrote:

> Hi,
>
> Is there a way to have the old pod moved into the terminating scope? Or is
> there an alternative solution for the following use case:
>
> User has the following quota:
> 1 pod in terminating scope
> 1 pod in non-terminating scope
>
> For new builds, the build will complete in the terminating scope but the
> replacement pod will not be able to start due to the quota.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev