Re: Design questions around Container logs, EFK & OCP

2018-03-13 Thread Luke Meyer
You said:

for example i don't want to lose any Operations / OCP related / Worker
> Nodes related logs going to the embedded ES cluster


 I was going to say that the fluentd config doesn't have a mechanism
to send these to a different ES cluster. Actually, it does: these are
all designated as "ops" logs, and there's a mechanism for defining
two separate (on-cluster) ES clusters and having fluentd send the "ops"
logs to one cluster and the regular container logs to the other. You may be
able to leverage that to have the non-ops container logs going to an
external cluster.

you mentioned (all node's logs ) , container logs and service logs. Can you
> please clarify the differences ?


For our purposes, all node logs include log entries in the journal as well
as container logs that docker writes separately when the json-file log
driver is configured (which I believe is the default again -- journal was
the default for a while).

By "service logs" I meant journal logs from the systemd services, for
example the master and node units. These are all considered ops logs.

Container logs are just logs from containers, whether they're in json files
or in journal, and whether they're workload containers or "Operations / OCP
related". Most OCP infrastructure components are deployed in projects that
are considered ops logs (there's a list in fluentd config).
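[The ops/non-ops split described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual fluentd config: the authoritative namespace list lives in the deployed fluentd configmap, and the record fields shown mirror the usual fluentd/kubernetes metadata shape.]

```python
# Sketch of the ops vs. non-ops classification fluentd applies per record.
# OPS_NAMESPACES is illustrative -- the real list is in the fluentd config.
OPS_NAMESPACES = {"default", "openshift", "openshift-infra", "kube-system"}

def is_ops_log(record):
    """Journal/service logs and infra-project container logs count as "ops"."""
    if record.get("source") == "journal":          # systemd service logs
        return True
    namespace = record.get("kubernetes", {}).get("namespace_name", "")
    return namespace in OPS_NAMESPACES             # OCP infrastructure projects

def destination(record):
    """Pick which ES cluster a record would be routed to."""
    return "ops-es" if is_ops_log(record) else "apps-es"
```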


On Mon, Mar 12, 2018 at 7:21 PM, Mohamed A. Shahat 
wrote:

> Thanks Luke, extremely enlightening.
>
> Now, can you help list the logs that are actually forwarded by the fluentd
> pods on worker nodes ? e.g. you mentioned (all node's logs ) , container
> logs and service logs. Can you please clarify the differences ?
>
> Many thanks,
>
>
> On 12 March 2018 at 23:18, Luke Meyer  wrote:
>
>> Although you can set up the fluentd instances to send logs either to the
>> integrated storage or an external ES, it will be tricky to do both with the
>> same deployment. They are deployed with a daemonset. What you can do is
>> copy the daemonset and configure both as you like (with different
>> secrets/configmaps), using node selectors and node labels to have the right
>> ones land on the right nodes. However that will direct *all* of the node's
>> logs; I don't think there's an easy way to have the container logs go to
>> one destination and the service logs to another, without more in-depth
>> configuration of fluentd. You do have complete control over its config if
>> you really want though by modifying the configmap.
>>
>> On Thu, Mar 8, 2018 at 4:15 AM, Mohamed A. Shahat 
>> wrote:
>>
>>> Thanks Aleks for the feedback.
>>>
>>> This looks promising.
>>>
>>> We're using Enterprise OCP. Does that make a difference at that level of
>>> discussion ?
>>>
>>> For the External Elasticsearch instance configs you referred to , is it
>>> possible to co-exist both ? Some Worker nodes sending logs to the internal
>>> ES, and some other Worker nodes sending logs to the external one ?
>>>
>>>
>>> Opensource origin:
>>>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>> Enterprise:
>>>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>>
>>>
>>>
>>> Many Thanks,
>>> /Mo
>>>
>>>
>>> On 7 March 2018 at 23:27, Aleksandar Lazic wrote:
>>>
>>>> Hi.
>>>>
>>>> Am 07.03.2018 um 23:47 schrieb Mohamed A. Shahat:
>>>> > Hi All,
>>>> >
>>>> > My first question here, so i am hoping at least for some
>>>> > acknowledgement !
>>>> >
>>>> > _Background_
>>>> >
>>>> >   * OCP v3.7
>>>> >
>>>> Do you use the enterprise version or the opensource one?
>>>> >
>>>> >   * Several Worker Nodes
>>>> >   * Few Workload types
>>>> >   * One Workload, let's call it WorkloadA is planned to have dedicated
>>>> > Worker Nodes.
>>>> >
>>>> > _Objective_
>>>> >
>>>> >   * for WorkloadA , I'd like to send/route the Container Logs to an
>>>> > External EFK / ELK stack other than the one that does get setup
>>>> > with OCP
>>>> >
>>>

Re: Design questions around Container logs, EFK & OCP

2018-03-12 Thread Luke Meyer
Although you can set up the fluentd instances to send logs either to the
integrated storage or an external ES, it will be tricky to do both with the
same deployment. They are deployed with a daemonset. What you can do is
copy the daemonset and configure both as you like (with different
secrets/configmaps), using node selectors and node labels to have the right
ones land on the right nodes. However that will direct *all* of the node's
logs; I don't think there's an easy way to have the container logs go to
one destination and the service logs to another, without more in-depth
configuration of fluentd. You do have complete control over its config if
you really want though by modifying the configmap.
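[The daemonset-copying approach above can be sketched as a small transformation: take the existing fluentd daemonset, rename it, pin it to labeled nodes (labels applied with `oc label node ...`), and point it at its own configmap/secret. The field paths follow the standard DaemonSet schema; all names below are illustrative, not taken from the actual deployment.]

```python
import copy

def split_daemonset(base_ds, name, node_label, configmap):
    """Derive a second fluentd daemonset from an existing one (as a dict),
    restricted to nodes carrying node_label and using its own configmap."""
    ds = copy.deepcopy(base_ds)                    # don't mutate the original
    ds["metadata"]["name"] = name
    pod_spec = ds["spec"]["template"]["spec"]
    # Only schedule on nodes labeled e.g. `oc label node <node> <label>=true`
    pod_spec["nodeSelector"] = {node_label: "true"}
    # Swap in a separate configmap so this copy can target the external ES
    for vol in pod_spec.get("volumes", []):
        if "configMap" in vol:
            vol["configMap"]["name"] = configmap
    return ds
```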

On Thu, Mar 8, 2018 at 4:15 AM, Mohamed A. Shahat  wrote:

> Thanks Aleks for the feedback.
>
> This looks promising.
>
> We're using Enterprise OCP. Does that make a difference at that level of
> discussion ?
>
> For the External Elasticsearch instance configs you referred to , is it
> possible to co-exist both ? Some Worker nodes sending logs to the internal
> ES, and some other Worker nodes sending logs to the external one ?
>
>
> Opensource origin:
>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>> Enterprise:
>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>
>
>
> Many Thanks,
> /Mo
>
>
> On 7 March 2018 at 23:27, Aleksandar Lazic 
> wrote:
>
>> Hi.
>>
>> Am 07.03.2018 um 23:47 schrieb Mohamed A. Shahat:
>> > Hi All,
>> >
>> > My first question here, so i am hoping at least for some
>> > acknowledgement !
>> >
>> > _Background_
>> >
>> >   * OCP v3.7
>> >
>> Do you use the enterprise version or the opensource one?
>> >
>> >   * Several Worker Nodes
>> >   * Few Workload types
>> >   * One Workload, let's call it WorkloadA is planned to have dedicated
>> > Worker Nodes.
>> >
>> > _Objective_
>> >
>> >   * for WorkloadA , I'd like to send/route the Container Logs to an
>> > External EFK / ELK stack other than the one that does get setup
>> > with OCP
>> >
>> > _Motivation_
>> >
>> >   * For Workload A, an ES cluster does already exist, we would like to
>> > reuse it.
>> >   * There is an impression that the ES cluster that comes with OCP
>> > might not necessarily scale if the team operating OCP does not
>> > size it well
>> >
>> > _Inquiries_
>> >
>> >  1. Has this been done before ? Yes / No ? Any comments ?
>> >
>> Yes.
>> As you may know, handling logs in a proper way is not an easy task.
>> There are some serious questions, like the following:
>>
>> * How long should the logs be preserved?
>> * How many logs are written?
>> * How fast are the logs written?
>> * What's the limit of the network?
>> * What's the limit of the remote ES?
>> * and many, many more questions
>>
>> >  1. Is there any way with the fluentd pods or else to route specific
>> > Workload / Pods Container logs to an external ES cluster ?
>> >  2. If not, i'm willing to deploy my own fluentd pods , what do i lose
>> > by excluding the WorkloadA Worker Nodes to not have the OCP
>> > fluentd pods ? for example i don't want to lose any Operations /
>> > OCP related / Worker Nodes related logs going to the embedded ES
>> > cluster, all i need is to have the Container Logs of WorkloadA to
>> > another ES cluster.
>> >
>> Have you looked at the following doc part?
>>
>> Opensource origin:
>> https://docs.openshift.org/latest/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>
>> Enterprise:
>> https://docs.openshift.com/container-platform/3.7/install_config/aggregate_logging.html#sending-logs-to-an-external-elasticsearch-instance
>>
>> As described in the doc, you can send the collected fluentd logs to an
>> external ES cluster.
>>
>> You can find the source of the openshift logging solution in this repo.
>> https://github.com/openshift/origin-aggregated-logging
>>
>> > Looking forward to hearing from you,
>> >
>> > Thanks,
>> Hth
>> Aleks
>>
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: 答复: dev Digest, Vol 71, Issue 1

2018-02-07 Thread Luke Meyer
On Sun, Feb 4, 2018 at 7:51 AM, Zhang William  wrote:

> So there is no v3.8 version?
>

None was released. By the time the kubernetes 1.8 code was rolled into
master, it was time to also roll in 1.9 changes. So 3.8 exists in the git
repo but was effectively skipped to catch up with kubernetes.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: New rolling pre-release tags for Origin v3.9

2018-02-07 Thread Luke Meyer
Just to make sure I'm clear on what you're saying here... the docker images
for these components will have tags that are updated as changes are made to
master. At some point, when we consider code "released", git tags will be
created, and at that point the corresponding docker image tags should
remain static. Right?

Curious, would we do this with alpha release tags as well or will those
continue to be one-shot releases? Or maybe just go away?

On Fri, Feb 2, 2018 at 1:44 AM, Clayton Coleman  wrote:

> Due to much popular demand and a desire to use this for rolling updates of
> cluster components, we have started publishing vX.Y and vX.Y.Z tags for
> origin, the registry, and logging and metrics.
>
> So by end of day tomorrow you should see v3.9 and v3.9.0 tags for all
> OpenShift components.  These tags are for pre-release code and are updated
> on every merge to master - we will roll them the entire release, and when
> we cut for v3.9.1 we'll immediately start tagging from master to the v3.9.1
> tag.  3.10 will start immediately after the release branch is cut for 3.9.
>
> As a user, you should only switch to a rolling tag once it's been
> "released" (a git tag exists) if you want to have a fully stable experience.
>
> oc cluster up and the image behavior will not be updated until we've had
> time to assess the impact of changing, although if you run "oc cluster up
> --version=v3.9" I would hope it would work.
>
> Stay tuned for more on autoupdating of cluster components.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
If the answer is just "go guru is dog slow, use something else in that
case" then that seems like a useful thing to note in the README :) Along
with what people actually use in development. Seems like a number of tools
rely on guru but everyone complains about how slow it is on large projects.
So that's one more vote for VS...

On Tue, Dec 5, 2017 at 10:53 AM, Dan Mace  wrote:

>
>
> On Tue, Dec 5, 2017 at 10:43 AM, Luke Meyer  wrote:
>
>> In the context of the vim-go plugin. However behavior seems much the same
>> if I run the same command at the command line (I pulled it out of ps -ef).
>>
>> On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug  wrote:
>>
>>> Are you using guru in some sort of editor/IDE or just standalone?
>>>
>>> On Dec 5, 2017 9:40 AM, "Luke Meyer"  wrote:
>>>
>>>>
>>>>
>>>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug  wrote:
>>>>
>>>>> Sounds like you have got auto compile still on?
>>>>>
>>>>>
>>>> What does this mean in the context of go guru? Is there an env var to
>>>> set, an option to add, a config file to change to control this behavior?
>>>>
>>>>
>>>
>>
> ​The same query:
>
> guru -scope github.com/openshift/origin/cmd/oc whicherrs
> ./pkg/oc/admin/diagnostics/diagnostics.go:#7624
>
> was taking long enough for me (go1.8.3 darwin/amd64) that I killed it.
> It's hard to say without doing a deeper profile of that guru command. Even
> with your relatively narrow pointer analysis scope ​it seems really slow,
> but then again it's hard to gauge exactly how narrow that scope is without
> looking at a full import dependency graph...
>
> Guru has always been really slow for lots of useful pointer analysis
> queries, so I'm not entirely surprised. This is why vscode-go uses a
> variety of more optimized special purpose tools for most analysis[1].
>
> [1] https://github.com/Microsoft/vscode-go/blob/master/src/goInstallTools.ts#L21
>
> --
>
> Dan Mace
>
> Principal Software Engineer, OpenShift
>
> Red Hat
>
> dm...@redhat.com
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
On Tue, Dec 5, 2017 at 10:51 AM, Clayton Coleman 
wrote:

> Openshift and Kubernetes are massive go projects - over 3 million lines of
> code (last I checked).  Initial compile can take a few minutes for these
> tools.  Things to check:
>
> 1. Go 1.9 uses less memory when compiling
> 2. Be sure you are reusing your go compiled artifacts dir between multiple
> tools (sometimes that is GOPATH/pkg, but openshift explicitly only compiles
> temp packages into _output/local/pkgdir for reasons)
>


So if I make clean all and then run my guru command, won't that be reusing
compiled artifacts? Is there some config that controls this? I don't think
I've customized anything.

It does seem to speed up a little bit after the first run but then it's
still pretty slow.



> 3. Get faster laptop :)
>
> On Dec 5, 2017, at 9:44 AM, Luke Meyer  wrote:
>
> In the context of the vim-go plugin. However behavior seems much the same
> if I run the same command at the command line (I pulled it out of ps -ef).
>
> On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug  wrote:
>
>> Are you using guru in some sort of editor/IDE or just standalone?
>>
>> On Dec 5, 2017 9:40 AM, "Luke Meyer"  wrote:
>>
>>>
>>>
>>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug  wrote:
>>>
>>>> Sounds like you have got auto compile still on?
>>>>
>>>>
>>> What does this mean in the context of go guru? Is there an env var to
>>> set, an option to add, a config file to change to control this behavior?
>>>
>>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
In the context of the vim-go plugin. However behavior seems much the same
if I run the same command at the command line (I pulled it out of ps -ef).

On Tue, Dec 5, 2017 at 10:40 AM, Sebastian Jug  wrote:

> Are you using guru in some sort of editor/IDE or just standalone?
>
> On Dec 5, 2017 9:40 AM, "Luke Meyer"  wrote:
>
>>
>>
>> On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug  wrote:
>>
>>> Sounds like you have got auto compile still on?
>>>
>>>
>> What does this mean in the context of go guru? Is there an env var to
>> set, an option to add, a config file to change to control this behavior?
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Fwd: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
-- Forwarded message --
From: Luke Meyer 
Date: Tue, Dec 5, 2017 at 10:39 AM
Subject: Re: [aos-devel] optimizing go guru
To: Sebastian Jug 
Cc: dev 




On Tue, Dec 5, 2017 at 9:36 AM, Sebastian Jug  wrote:

> Sounds like you have got auto compile still on?
>
>
What does this mean in the context of go guru? Is there an env var to set,
an option to add, a config file to change to control this behavior?
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [aos-devel] optimizing go guru

2017-12-05 Thread Luke Meyer
In this case I'm also running into
https://github.com/openshift/origin/issues/17588 but perhaps it's all
related.

[origin] $ git describe
v3.9.0-alpha.0-11-ga5c80373e4
[origin] $ go version
go version go1.8.5 linux/amd64
[origin] $ echo $GOPATH
/home/lmeyer/go
[origin] $ time /home/lmeyer/go/bin/guru -scope
github.com/openshift/origin/cmd/oc whicherrs /home/lmeyer/go/src/
github.com/openshift/origin/pkg/oc/admin/diagnostics/diagnostics.go:#7624

/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/mtrmac/gpgme/data.go:4:11:
fatal error: gpgme.h: No such file or directory
 // #include <gpgme.h>
   ^
compilation terminated.
cgo failed: [go tool cgo -objdir
/tmp/github.com_openshift_origin_vendor_github.com_mtrmac_gpgme_C519117980
-- -D_FILE_OFFSET_BITS=64 -I
/tmp/github.com_openshift_origin_vendor_github.com_mtrmac_gpgme_C519117980
data.go gpgme.go]: exit status 1
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:16:16:
Context not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:66:44:
Context not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:97:20:
NewDataBytes not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:101:14:
invalid operation: m.ctx (variable of type *invalid type) has no field or
method Import
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:106:20:
cannot range over res.Imports (invalid operand)
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:122:14:
invalid operation: m.ctx (variable of type *invalid type) has no field or
method GetKey
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:126:20:
NewDataBytes not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:131:18:
NewDataWriter not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:135:11:
invalid operation: m.ctx (variable of type *invalid type) has no field or
method Sign
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:135:25:
Key not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:135:61:
SigModeNormal not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:144:21:
NewDataWriter not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:148:34:
NewDataBytes not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:152:18:
invalid operation: m.ctx (variable of type *invalid type) has no field or
method Verify
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:161:42:
ValidityNever not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:67:14:
New not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:71:27:
ProtocolOpenPGP not declared by package gpgme
/home/lmeyer/go/src/
github.com/openshift/origin/vendor/github.com/containers/image/signature/mechanism_gpgme.go:75:28:
ProtocolOpenPGP not declared by package gpgme
guru: couldn't load packages due to errors:
github.com/openshift/origin/vendor/github.com/containers/image/signature,
github.com/openshift/origin/vendor/github.com/mtrmac/gpgme

real 0m10.041s
user 1m0.303s
sys 0m7.648s



On Tue, Dec 5, 2017 at 9:36 AM, Dan Mace  wrote:

>
>
> On Tue, Dec 5, 2017 at 9:31 AM, Luke Meyer  wrote:
>
>> I must be doing something wrong. Whenever go guru is fired off against
>> the origin codebase (for example, with godef or callstack or whicherrs) it
>> takes several seconds (or more) to do anything, sucking up GB of RAM and
>> all CPUs. I imagine it must be compiling the world, which is rather large.
>> Perhaps I am using the wrong scope. For instance, when working with oc, I
>> set scope to github.com/openshift/origin/cmd/oc. Should it be something
>> else? The guru docs are not as clear as they could be on exactly what
>> impact this has.
>>
>> How do you c

optimizing go guru

2017-12-05 Thread Luke Meyer
I must be doing something wrong. Whenever go guru is fired off against the
origin codebase (for example, with godef or callstack or whicherrs) it
takes several seconds (or more) to do anything, sucking up GB of RAM and
all CPUs. I imagine it must be compiling the world, which is rather large.
Perhaps I am using the wrong scope. For instance, when working with oc, I
set scope to github.com/openshift/origin/cmd/oc. Should it be something
else? The guru docs are not as clear as they could be on exactly what
impact this has.

How do you configure your development environment to do code analysis
efficiently against origin? The defaults don't seem to work too well, so if
there are any tips, it would be nice to have them in our dev readme(s).
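[A quick way to compare different `-scope` settings empirically is just to time each invocation. Here's a minimal timing wrapper; the guru command line in the comment is the one from this thread, shown only as an example of what you'd pass in.]

```python
import subprocess
import time

def time_command(cmd):
    """Run cmd once and return (elapsed_seconds, returncode).
    e.g. cmd = ["guru", "-scope", "github.com/openshift/origin/cmd/oc",
                "whicherrs", "pkg/oc/admin/diagnostics/diagnostics.go:#7624"]"""
    start = time.perf_counter()
    proc = subprocess.run(cmd, capture_output=True)
    return time.perf_counter() - start, proc.returncode
```

Running it once per candidate scope makes it easy to see whether narrowing the scope actually helps, keeping in mind the first run may pay a compile cost that later runs don't.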
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: GithubGraphs

2017-08-31 Thread Luke Meyer
Yeah, neat! You know of course when you provide a visualization like this,
people will want to expand the options :) For instance it would be nice to
see the duration starting from applying the "lgtm" label (in repos where
that applies), or from the first approval response, or from the first
[merge] comment (in repos where that applies), or from the first commit, or
time of opening the PR... and be able to change the buckets accordingly
since those will imply different distributions. And how many PRs were
closed without merging, and whether their owners closed them or someone
else...

Needs a "reset options" button too.

You know, just a few things :)
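[The lgtm-to-merge duration suggested above is straightforward to compute from the GitHub issue-events data. A sketch, assuming events shaped like the GET /repos/{owner}/{repo}/issues/{number}/events response (the "lgtm" label name is the convention mentioned in this thread):]

```python
from datetime import datetime

ISO = "%Y-%m-%dT%H:%M:%SZ"  # timestamp format used by the GitHub API

def hours_lgtm_to_merge(events, merged_at):
    """Hours between the first "lgtm" labeling event and the merge time,
    or None if the PR never received the label."""
    lgtm_times = [
        datetime.strptime(e["created_at"], ISO)
        for e in events
        if e.get("event") == "labeled" and e.get("label", {}).get("name") == "lgtm"
    ]
    if not lgtm_times:
        return None
    merged = datetime.strptime(merged_at, ISO)
    return (merged - min(lgtm_times)).total_seconds() / 3600
```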

On Thu, Aug 31, 2017 at 9:04 AM, Michail Kargakis 
wrote:

> Hi Martin,
>
> this is cool! Do you compare the time when a PR was opened with the time
> it was merged? Also, are you using the GitHub API directly or another
> library? GitHub API v3 or v4?
>
> In Origin we have deployed a new CI system, which has been running for
> some time in Kubernetes, that merges PRs based on a set of labels. It
> would be nice to see Origin graphs from the past two months by taking
> into account the time the "lgtm" label was added in a PR (I would be
> surprised if we can't get that info from Github) and not the creation
> of the PR, assuming that's what you are doing.
>
> https://raw.githubusercontent.com/ocasek/GithubGraphs/master/Screenshot_2.png
> It's nice to see how the number of PRs older than 2 days has decreased
> in the past two months :)
>
> On Thu, Aug 31, 2017 at 2:39 PM, Martin Nečas 
> wrote:
> > Hello,
> >
> > My name is Martin Nečas and I'm high school student and Red Hat intern in
> > Brno.
> > http://martin.codingkwoon.com/
> > This is my project, in which you can compare how long it takes to merge
> > pull requests per month. If you add a new repository, it will need to be
> > confirmed by the site admin for security reasons.
> > I would be glad if you could give me some feed back about the project.
> >
> > Screenshot: http://imgur.com/a/JyoAv
> > Source code: https://github.com/ocasek/GithubGraphs
> > Examples:
> >   1) http://martin.codingkwoon.com/openshift/origin/
> >   2) http://martin.codingkwoon.com/kubernetes/kubernetes/
> >   3) http://martin.codingkwoon.com/openshift/openshift-ansible/
> >
> >
> > With greetings,
> > Martin Nečas
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: OpenShift Origin v3.6.0 is released

2017-07-31 Thread Luke Meyer
On Mon, Jul 31, 2017 at 11:34 AM, Clayton Coleman 
wrote:

> Remember to use the Ansible release-3.6 branch for your installs.
>
>
You can also skip installing Ansible and checking out the repo and just use
the containerized install image.
Tag v3.6.0 is built from the release-3.6 branch.
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


alpha features

2016-07-27 Thread Luke Meyer
How do I turn on alpha features in Origin? E.g. dynamic provisioning, auto
service cert generation...
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: /etc/localtime

2016-07-08 Thread Luke Meyer
That would be nice, if clients reported their timezone. Web browsers don't,
leaving you with UI hacks to try to determine it, or the more standard
method of having the user specify it.

Docker json files report each line's timestamp in UTC I believe, however
there's no guarantee what the servers inside will do regarding timezone,
and there's a lot more than just logs to consider... as mentioned,
scheduled/cron-type jobs are actually what inspired this question.

You don't know that the user's timezone will match the server, but it's a
good default guess.
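[If the platform ever did inject the host timezone as an env var, the consuming side would be simple: honor TZ when present, fall back to UTC otherwise. A minimal sketch, assuming the standard TZ variable and Python's zoneinfo database:]

```python
import os
from datetime import datetime, timezone

try:
    from zoneinfo import ZoneInfo  # Python 3.9+ stdlib timezone database
except ImportError:
    ZoneInfo = None

def container_timezone():
    """Return the tzinfo named by the TZ env var if it's set and resolvable,
    else UTC -- the safe default inside a container."""
    name = os.environ.get("TZ")
    if name and ZoneInfo is not None:
        try:
            return ZoneInfo(name)
        except Exception:
            pass  # unknown zone name or missing tzdata: fall through to UTC
    return timezone.utc
```

An app could then use this, for example, to render log timestamps or cron schedules in the administrator's expected zone instead of raw UTC.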


On Fri, Jul 8, 2016 at 12:10 PM, Clayton Coleman 
wrote:

> UI should be using the client's local timezone, so that's not really a
> problem.  No server should ever be translating output to its local timezone.
>
> On Fri, Jul 8, 2016 at 11:27 AM, Brandon Richins <
> brandon.rich...@imail.org> wrote:
>
>> I agree about the UI for the caller.  However, in some circumstances, 99%
>> of the business use cases and customers are in the same timezone as the
>> servers running the apps.  if the UI is generated on a server, like old
>> servlet JEE tech, then having the app timezone set (regardless of client
>> timezone) may be useful.  I can also see a case for scheduled/cron-like
>> jobs being more readable with an assumed timezone.
>>
>>
>>
>> *Brandon Richins*
>>
>>
>>
>> *From: *Clayton Coleman 
>> *Date: *Friday, July 8, 2016 at 8:56 AM
>> *To: *Luke Meyer 
>> *Cc: *Brandon Richins , dev <
>> dev@lists.openshift.redhat.com>
>> *Subject: *Re: /etc/localtime
>>
>>
>>
>> Shouldn't logs be written to UTC and the UI of the caller be used for
>> that?
>>
>>
>>
>> I would expect all the stored data to be normalized correctly when shown.
>>
>>
>> On Jul 8, 2016, at 10:49 AM, Luke Meyer  wrote:
>>
>> If you can docker run as shown, sure, you can mount in the appropriate
>> thing for your container distro, or set an env var. I'm looking for a more
>> generic addition to the OpenShift Origin container environment. When you
>> "oc new-app" a template you don't know what timezone the resulting node
>> will have, and you don't particularly want to require the hostmount SCC
>> just for that. Since the distro in the container could be looking at
>> different files, I thought it would be a good to have kubernetes add the
>> timezone into a known env var. The container doesn't necessarily have to
>> use it but that way it could choose e.g. to write logs with a timezone that
>> matches the host, or to offer a good UI default for the administrator's
>> timezone.
>>
>>
>>
>> On Wed, Jul 6, 2016 at 2:40 PM, Brandon Richins <
>> brandon.rich...@imail.org> wrote:
>>
>> It looks like this could be a complicated issue.  I searched around a
>> little because a colleague of mine had some timezone issues with Docker
>> lately.  I think each distro may have its own way of doing timezones.  Many
>> seem to share the /etc/timezone, /etc/localtime, and /usr/share/zoneinfo
>> files/folders.  Alpine doesn’t seem to come with timezone data in their
>> base image.
>>
>>
>>
>> It appears to me that the kernel keeps time in UTC and therefore Docker
>> (by default) will use UTC for its containers.  I’ve seen posts to either
>> export the TZ environment variable or to use host mounts.
>>
>>
>>
>>
>> http://olavgg.com/post/117506310248/docker-how-to-fix-date-and-timezone-issues
>>
>> sudo docker run --rm -it \
>>
>>   -v /etc/localtime:/etc/localtime:ro \
>>
>>   -v /etc/timezone:/etc/timezone:ro \
>>
>>   --name my_container debian:jessie date
>>
>>
>>
>> Please correct me if I’m wrong.
>>
>>
>>
>> *Brandon Richins*
>>
>>
>>
>> *From: * on behalf of Luke Meyer
>> 
>> *Date: *Wednesday, July 6, 2016 at 11:27 AM
>> *To: *dev 
>> *Subject: */etc/localtime
>>
>>
>>
>> Is there a simple way to find out the host's local timezone without
>> having to mount /etc/localtime (which is pretty painful given it requires
>> hostmount)? Could there be some way it's passed in as an env var or
>> something?
>>
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: /etc/localtime

2016-07-08 Thread Luke Meyer
If you can docker run as shown, sure, you can mount in the appropriate
thing for your container distro, or set an env var. I'm looking for a more
generic addition to the OpenShift Origin container environment. When you
"oc new-app" a template you don't know what timezone the resulting node
will have, and you don't particularly want to require the hostmount SCC
just for that. Since the distro in the container could be looking at
different files, I thought it would be good to have kubernetes add the
timezone into a known env var. The container doesn't necessarily have to
use it but that way it could choose e.g. to write logs with a timezone that
matches the host, or to offer a good UI default for the administrator's
timezone.

On Wed, Jul 6, 2016 at 2:40 PM, Brandon Richins 
wrote:

> It looks like this could be a complicated issue.  I searched around a
> little because a colleague of mine had some timezone issues with Docker
> lately.  I think each distro may have its own way of doing timezones.  Many
> seem to share the /etc/timezone, /etc/localtime, and /usr/share/zoneinfo
> files/folders.  Alpine doesn’t seem to come with timezone data in their
> base image.
>
>
>
> It appears to me that the kernel keeps time in UTC and therefore Docker
> (by default) will use UTC for its containers.  I’ve seen posts to either
> export the TZ environment variable or to use host mounts.
>
>
>
>
> http://olavgg.com/post/117506310248/docker-how-to-fix-date-and-timezone-issues
>
> sudo docker run --rm -it \
>
>   -v /etc/localtime:/etc/localtime:ro \
>
>   -v /etc/timezone:/etc/timezone:ro \
>
>   --name my_container debian:jessie date
>
>
>
> Please correct me if I’m wrong.
>
>
>
> *Brandon Richins*
>
>
>
> *From: * on behalf of Luke Meyer <
> lme...@redhat.com>
> *Date: *Wednesday, July 6, 2016 at 11:27 AM
> *To: *dev 
> *Subject: */etc/localtime
>
>
>
> Is there a simple way to find out the host's local timezone without having
> to mount /etc/localtime (which is pretty painful given it requires
> hostmount)? Could there be some way it's passed in as an env var or
> something?
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


/etc/localtime

2016-07-06 Thread Luke Meyer
Is there a simple way to find out the host's local timezone without having
to mount /etc/localtime (which is pretty painful given it requires
hostmount)? Could there be some way it's passed in as an env var or
something?


Re: packaging

2016-06-29 Thread Luke Meyer
Err yeah, https://github.com/openshift/origin/blob/master/origin.spec looks
promising. For some reason I was expecting it in hack/

On Wed, Jun 29, 2016 at 12:07 PM, Clayton Coleman 
wrote:

> The spec file checked in to the repo is the same one that is used to build
> those RPMs, isn't it?
>
> On Jun 29, 2016, at 8:12 AM, Luke Meyer  wrote:
>
> The origin project itself doesn't maintain spec files. However you might
> find the Fedora and EPEL source rpms interesting:
>
> Fedora -
> https://kojipkgs.fedoraproject.org//packages/origin/1.2.0/1.git.0.2e62fab.fc24/src/origin-1.2.0-1.git.0.2e62fab.fc24.src.rpm
> CentOS/EPEL -
> http://cbs.centos.org/kojifiles/packages/origin/1.2.0/4.el7/src/origin-1.2.0-4.el7.src.rpm
>
> On Mon, Jun 27, 2016 at 8:23 AM, Cameron Braid 
> wrote:
>
>> Hi,
>>
>> I'd like to build my own src.rpm for openshift origin (v1.3.0-alpha.2),
>> but I can't find where the relevant build/packaging scripts are.
>>
>> Cameron
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: packaging

2016-06-29 Thread Luke Meyer
The origin project itself doesn't maintain spec files. However you might
find the Fedora and EPEL source rpms interesting:

Fedora -
https://kojipkgs.fedoraproject.org//packages/origin/1.2.0/1.git.0.2e62fab.fc24/src/origin-1.2.0-1.git.0.2e62fab.fc24.src.rpm
CentOS/EPEL -
http://cbs.centos.org/kojifiles/packages/origin/1.2.0/4.el7/src/origin-1.2.0-4.el7.src.rpm

On Mon, Jun 27, 2016 at 8:23 AM, Cameron Braid 
wrote:

> Hi,
>
> I'd like to build my own src.rpm for openshift origin (v1.3.0-alpha.2),
> but I can't find where the relevant build/packaging scripts are.
>
> Cameron
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: OpenShift Origin v1.3.0-alpha.2 has been released

2016-06-23 Thread Luke Meyer
docker.io/openshift/origin-logging-auth-proxy does not seem to have the
v1.3.0-alpha.2 tag (or .1 for that matter). The other logging images all
have the tag. auth-proxy was added to the build/release scripts in the last
few weeks as it was brought into git as a submodule; perhaps an older
script version was used in those releases?

There haven't been actual changes to the image, it just needs the tag.

On Wed, Jun 22, 2016 at 10:42 PM, Clayton Coleman 
wrote:

> The release notes are up on GitHub
> https://github.com/openshift/origin/releases/tag/v1.3.0-alpha.2 along
> with binaries.  Improvements to the console UI have been the primary focus.
>
> Please report any issues you encounter!
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: fsGroup vs. supplementalGroups

2016-06-23 Thread Luke Meyer
On Wed, Jun 22, 2016 at 12:14 PM, Alan Jones  wrote:

> I have a configuration for a PV/PVC with a block device that works in the
> default namespace with the fsGroup tag in the pod spec's securityContext.
> I was able to create the pod in a non-default namespace with combination
> of 'openshift.io/scc: restricted' and a supplementalGroups tag with the
> same value; but this gave the familiar permission denied error trying to
> write to the new directory.
>
> https://docs.openshift.com/enterprise/3.2/install_config/storage_examples/shared_storage.html
> Note, my image is not being built by OpenShift and has a particular user
> and group that runs out of the box.
> 1) Can you configure persistent block device storage for non-default
> projects?
>

PVs don't care what project they're used with, so yes. Project is not
important here, but service account being a member of the right SCC does if
you're trying to specify securityContext.


> 2) Do you need to build the container image for this configuration?
>

The container should generally be none the wiser as to how its storage is
supplied.


> 3) Is support required in the volume driver to interpret
> 'supplementalGroups' separate from 'fsGroup'?
> (I don't see any reference to 'supplementalGroups' in k8s volume code
> where I do see 'fsGroup'.)
>

Don't know. I think supplementalGroups is an OpenShift addition. Note under:
https://docs.openshift.com/enterprise/3.2/install_config/persistent_storage/pod_security_context.html#supplemental-groups
"The *supplementalGroups* IDs are typically used for controlling access to
shared storage, such as NFS and GlusterFS, whereas fsGroup is used for
controlling access to block storage, such as Ceph RBD and iSCSI."
I don't know if this means supplemental groups are *ignored* for the
purposes of block storage...



> Thank you!
> Alan
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: "manifest unknown" test flakes

2016-06-08 Thread Luke Meyer
It's not a flake for me, it's happening every time. I just fired up a
Centos 7 devenv AMI and it had docker 1.9. And FWIW it doesn't look like
Fedora 23 has shipped 1.10 yet either.

On Tue, Jun 7, 2016 at 11:07 PM, Clayton Coleman 
wrote:

> Yes, but you do not need v2 metadata.
>
> I just saw this flake again on a PR.  We need to dig into this but it
> seems to be happening more on 1.10.
>
> On Jun 7, 2016, at 10:54 PM, Steve Kuznetsov  wrote:
>
> The image needs to be pushed with a Docker client 1.10+ to generate the V2
> metadata if I understand it correctly.
>
> Steve
> On Jun 7, 2016 10:27 PM, "Ben Parees"  wrote:
>
>>
>>
>> On Tue, Jun 7, 2016 at 9:48 PM, Clayton Coleman 
>> wrote:
>>
>>> We are building and pushing our images with Docker 1.10 for Origin (but
>>> possibly not in all AMIs, depending on the test job).  The last set of
>>> pushed images (alpha.1) was pushed from Docker 1.9.
>>>
>>> This might just be docker flaking, but we'd probably need more info from
>>> the Docker logs.
>>>
>>
>> is the issue here that the ruby-22-centos7 image itself needs to be
>> rebuilt/pushed from docker 1.10?
>>
>> if so (and if our AMIs are on docker 1.10) i can kick off a fresh round
>> of our s2i image build/push job to refresh everything.
>> ​
>>
>>
>>
>>>
>>> On Tue, Jun 7, 2016 at 9:41 PM, Luke Meyer  wrote:
>>>
>>>> I've seen this twice in logging tests, in the ruby STI builder today
>>>> and in the origin image last week, so I'm wondering if it's a trend.
>>>>
>>>>
>>>> https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/381/artifact/origin/artifacts/logs/container-ruby-sample-build-1-build-sti-build.log
>>>>
>>>> Pulling Docker image 
>>>> centos/ruby-22-centos7@sha256:e5ebb014d02b5a7faa7b83f6c58f4cc4eb9892edbf61172e59f2ae182dc2
>>>> ...
>>>> I0608 01:08:34.707737   1 glog.go:50] An error was received from
>>>> the PullImage call: manifest unknown: manifest unknown
>>>>
>>>> The closest thing I could find under issues is
>>>> https://github.com/openshift/origin/issues/9122 where it was
>>>> reportedly due to the image being pushed by docker 1.10.
>>>>
>>>> Is someone pushing our images to dockerhub with docker 1.10?
>>>>
>>>> Builds run on CentOS7, don't think that AMI has docker 1.10 yet.
>>>>
>>>> ___
>>>> dev mailing list
>>>> dev@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>>
>>>>
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Ben Parees | OpenShift
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: "manifest unknown" test flakes

2016-06-07 Thread Luke Meyer
With the origin image last week, it was persistent until we specified
building from the v1.2.0 tag instead of "latest". With this image it has
happened twice in a row. I'll try another...
https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/382/

May be able to reproduce manually, we will see. What would it look like if
docker is flaking?

On Tue, Jun 7, 2016 at 9:48 PM, Clayton Coleman  wrote:

> We are building and pushing our images with Docker 1.10 for Origin (but
> possibly not in all AMIs, depending on the test job).  The last set of
> pushed images (alpha.1) was pushed from Docker 1.9.
>
> This might just be docker flaking, but we'd probably need more info from
> the Docker logs.
>
> On Tue, Jun 7, 2016 at 9:41 PM, Luke Meyer  wrote:
>
>> I've seen this twice in logging tests, in the ruby STI builder today and
>> in the origin image last week, so I'm wondering if it's a trend.
>>
>>
>> https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/381/artifact/origin/artifacts/logs/container-ruby-sample-build-1-build-sti-build.log
>>
>> Pulling Docker image 
>> centos/ruby-22-centos7@sha256:e5ebb014d02b5a7faa7b83f6c58f4cc4eb9892edbf61172e59f2ae182dc2
>> ...
>> I0608 01:08:34.707737   1 glog.go:50] An error was received from the
>> PullImage call: manifest unknown: manifest unknown
>>
>> The closest thing I could find under issues is
>> https://github.com/openshift/origin/issues/9122 where it was reportedly
>> due to the image being pushed by docker 1.10.
>>
>> Is someone pushing our images to dockerhub with docker 1.10?
>>
>> Builds run on CentOS7, don't think that AMI has docker 1.10 yet.
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>


"manifest unknown" test flakes

2016-06-07 Thread Luke Meyer
I've seen this twice in logging tests, in the ruby STI builder today and in
the origin image last week, so I'm wondering if it's a trend.

https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/381/artifact/origin/artifacts/logs/container-ruby-sample-build-1-build-sti-build.log

Pulling Docker image
centos/ruby-22-centos7@sha256:e5ebb014d02b5a7faa7b83f6c58f4cc4eb9892edbf61172e59f2ae182dc2
...
I0608 01:08:34.707737   1 glog.go:50] An error was received from the
PullImage call: manifest unknown: manifest unknown

The closest thing I could find under issues is
https://github.com/openshift/origin/issues/9122 where it was reportedly due
to the image being pushed by docker 1.10.

Is someone pushing our images to dockerhub with docker 1.10?

Builds run on CentOS7, don't think that AMI has docker 1.10 yet.


readiness probes and clustered discovery

2016-05-19 Thread Luke Meyer
We have a plugin for Elasticsearch to cluster based on looking up endpoints
on its clustering service (which runs at separate port 9300 instead of http
port 9200). But in order to be among the endpoints on a service, the
cluster members have to be considered "up"; so this must occur before they
can even discover each other. The result is that there can't be a
meaningful readiness probe, and clients of the service get back errors
until it is really up.

We could get around this if readiness probes could be honored/ignored by
specific services, or if there were some other method of indicating a more
nuanced "readiness". If the service for port 9300 could consider the
members up once in "Running" state, but the service at port 9200 waited for
a readiness check, everything would work out well.

Is this strictly a kubernetes issue? Is there any movement in this
direction? It seems like something that many clustered services would
benefit from.
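
For the record, Kubernetes did later grow exactly this per-service escape hatch: first as an alpha annotation, and eventually as the `publishNotReadyAddresses` service field. A hedged sketch of the split described above (names and labels are illustrative, not the actual logging deployment's):

```yaml
# Discovery service on 9300 includes pods before they pass readiness,
# so cluster members can find each other and bootstrap.
apiVersion: v1
kind: Service
metadata:
  name: es-cluster
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  clusterIP: None
  selector:
    component: es
  ports:
  - port: 9300
---
# Client-facing service on 9200 still honors the readiness probe, so
# clients only ever see members that are actually up.
apiVersion: v1
kind: Service
metadata:
  name: es
spec:
  selector:
    component: es
  ports:
  - port: 9200
```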


Fwd: linux file caching in containers?

2016-05-18 Thread Luke Meyer
Does anyone know if Linux file caching is compartmentalized in Docker
containers or accounted for in their memory limits?

The particular context of this question is Elasticsearch:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html#_give_less_than_half_your_memory_to_lucene

"Lucene is designed to leverage the underlying OS for caching in-memory
data structures. Lucene segments are stored in individual files. Because
segments are immutable, these files never change. This makes them very
cache friendly, and the underlying OS will happily keep hot segments
resident in memory for faster access."

So the question is, if I want to reserve 4GB (via JVM options) for
ElasticSearch running in a container, and 4GB for file caching for Lucene
performance, do I reserve 8GB for the container, or try to ensure that the
host the container is running on has 4GB RAM free outside the container?
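
One reading worth verifying for your kernel/docker version: the cgroup memory controller charges page cache generated by the container's own file I/O to the container, so the cache budget has to live inside the limit rather than "outside" on the host. Under that assumption the arithmetic looks like:

```yaml
# Assumption (verify on your kernel): page cache for the container's
# files counts against its memory limit, so reserve heap + cache there.
containers:
- name: elasticsearch
  env:
  - name: ES_JAVA_OPTS
    value: "-Xms4g -Xmx4g"   # 4GB JVM heap for Elasticsearch
  resources:
    limits:
      memory: 8Gi            # heap plus ~4GB headroom for Lucene's page cache
```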


Re: SSL error on vagrant up in origin

2016-05-11 Thread Luke Meyer
One of the three mirror backends has an outdated cert, which we're working
on fixing. In the meantime, if you just try it again, you have a pretty
good chance of success.

On Wed, May 11, 2016 at 2:02 AM, Suraj Deshmukh  wrote:

> Hi,
>
>
> When doing `vagrant up` on the origin root directory, ssl cert expired
> error.
> I got it working by editing Vagrantfile to use http. What is the
> better/right way of doing it?
>
>
> ```
> $ pwd
> /home/xyz/go/src/github.com/openshift/origin
>
> $ vagrant up
> Bringing machine 'openshiftdev' up with 'libvirt' provider...
> ==> openshiftdev: Box 'fedora_inst' could not be found. Attempting to
> find and install...
> openshiftdev: Box Provider: libvirt
> openshiftdev: Box Version: >= 0
> ==> openshiftdev: Box file was not detected as metadata. Adding it
> directly...
> ==> openshiftdev: Adding box 'fedora_inst' (v0) for provider: libvirt
> openshiftdev: Downloading:
>
> https://mirror.openshift.com/pub/vagrant/boxes/openshift3/fedora_libvirt_inst.box
> An error occurred while downloading the remote file. The error
> message, if any, is reproduced below. Please fix this error and try
> again.
>
> Peer's Certificate has expired.
> More details here: http://curl.haxx.se/docs/sslcerts.html
>
> curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
> If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
> If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> ```
>
> --
> - Suraj Deshmukh (surajd)
>
> https://deshmukhsuraj.wordpress.com
> https://twitter.com/surajd_
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>


Re: binary source in a Custom type build

2016-05-09 Thread Luke Meyer
Works perfectly, thanks.

If anyone is curious,
https://github.com/openshift/origin-apiman/blob/ee1da0249ed095cde0727e6c461cc913f8fdeb73/apiman-builder/build.sh#L20
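For anyone skimming the archive later: the mechanism Ben describes below — the `--from-dir` content arriving as a tar stream on the builder's stdin — boils down to something like this sketch. `extract_binary_source` and the target path are illustrative names, not part of any OpenShift API.

```shell
# Minimal sketch of consuming binary build input in a custom builder.
# During `oc start-build --from-dir=...`, the archive is streamed to the
# build container's stdin as a tar stream; untar it to get the sources.
extract_binary_source() {
    target="$1"
    mkdir -p "$target"
    # Read the tar stream from stdin into the target directory.
    tar -xf - -C "$target"
}
```

In a builder script you'd call something like `extract_binary_source /tmp/src` near the top, before kicking off the actual build.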

On Thu, May 5, 2016 at 4:37 PM, Ben Parees  wrote:

> i believe the content is being streamed into your stdin.  so your custom
> image would need to read stdin as a tar stream.
>
> On Thu, May 5, 2016 at 4:31 PM, Luke Meyer  wrote:
>
>> How in a custom builder do you retrieve binary build content (from e.g.
>> the --from-dir flag)?
>> https://docs.openshift.org/latest/dev_guide/builds.html#binary-source
>> does not seem to give any clues. SOURCE_URI comes in blank. Is there a
>> secret handshake I'm missing?
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
>
> --
> Ben Parees | OpenShift
>
>


binary source in a Custom type build

2016-05-05 Thread Luke Meyer
How in a custom builder do you retrieve binary build content (from e.g. the
--from-dir flag)?
https://docs.openshift.org/latest/dev_guide/builds.html#binary-source does
not seem to give any clues. SOURCE_URI comes in blank. Is there a secret
handshake I'm missing?


Re: Three-tier application deployment on OpenShift origin

2016-05-05 Thread Luke Meyer
It's just that the "zone=" label is discussed in our example scheduler
configs
<https://docs.openshift.com/enterprise/3.1/admin_guide/scheduler.html#use-cases>
used for service spreading so it has a technical significance. Using "env="
would be fine.

On Wed, May 4, 2016 at 11:41 AM, Erik Jacobs  wrote:

> Hi Luke,
>
> I'll have to disagree but only semantically.
>
> For a small environment and without changing the scheduler config, the
> concept of "zone" can be used. Yes, I would agree with you that in a real
> production environment the Red Hat concept of a "zone" is as you described.
>
> You could additionally label nodes with something like "env=appserver" and
> use nodeselectors on that. This is probably a more realistic production
> expectation.
>
> For the purposes of getting Abdala's small environment going, I guess it
> doesn't much "matter"...
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Wed, May 4, 2016 at 11:36 AM, Luke Meyer  wrote:
>
>>
>>
>> On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs  wrote:
>>
>>> Hi Olga,
>>>
>>> Some responses inline/
>>>
>>>
>>> Erik M Jacobs, RHCA
>>> Principal Technical Marketing Manager, OpenShift Enterprise
>>> Red Hat, Inc.
>>> Phone: 646.462.3745
>>> Email: ejac...@redhat.com
>>> AOL Instant Messenger: ejacobsatredhat
>>> Twitter: @ErikonOpen
>>> Freenode: thoraxe
>>>
>>> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga 
>>> wrote:
>>>
>>>> Hello all,
>>>>
>>>>
>>>>
>>>> I am done with my *origin advanced installation* (thanks to your
>>>> useful help) which architecture is composed of *4 virtualized servers* (on
>>>> the same network):
>>>>
>>>> -   1  Master
>>>>
>>>> -   2 Nodes
>>>>
>>>> -   1 VM hosting Ansible
>>>>
>>>>
>>>>
>>>> My next steps are to implement/test some use cases with a *three-tier
>>>> App*(each App’s tier being hosted on a different VM):
>>>>
>>>> -   The * horizontal scalability*;
>>>>
>>>> -   The * load-balancing* of the Nodes : Keep the system running
>>>> even if one of the VMs goes down;
>>>>
>>>> -   App’s monitoring using *Origin API*: Allow the Origin API to
>>>> “tell” the App on which VM is hosted each tier. (I still don’t know how to
>>>> test that though…)
>>>>
>>>>
>>>>
>>>> There are some * notions* that are still not clear to me:
>>>>
>>>> -   From my web console, how can I know *on which Node has my App
>>>> been deployed*?
>>>>
>>>
>>> If you look in the Browse -> Pods -> select a pod, you should see the
>>> node where the pod is running.
>>>
>>>
>>>> -   How can I put *each component of my App* on a *separated Node*?
>>>>
>>>> -   How does the “*zones*” concept in origin work?
>>>>
>>>
>>> These two are closely related.
>>>
>>> 1) In your case it sounds like you would want a zone for each tier:
>>> appserver, web server, db
>>> 2) This would require a node with a label of, for example, zone=appserver
>>> 3) When you create your pod (or replication controller, or deployment
>>> config) you would want to specify, via a nodeselector, which zone you want
>>> the pod(s) to land in
>>>
>>>
>> This is not the concept of zones. The point of zones is to spread
>> replicas between different zones in order to improve HA (for instance,
>> define a zone per rack, thereby ensuring that taking down a rack doesn't
>> take down your app that's scaled across multiple zones).
>>
>> This isn't what you want though. And you'd certainly never put a zone in
>> a nodeselector for an RC if you're trying to scale it to multiple zones.
>>
>> For the purpose of separating the tiers of your app, you would still want
>> to use a nodeselector per DC or RC and corresponding node labels. There's
>> no other way to designate where you want the pods from different RCs to
>> land. You just don't want "zones".

Re: Three-tier application deployment on OpenShift origin

2016-05-04 Thread Luke Meyer
On Tue, May 3, 2016 at 10:57 AM, Erik Jacobs  wrote:

> Hi Olga,
>
> Some responses inline/
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Mon, Apr 25, 2016 at 9:34 AM, ABDALA Olga 
> wrote:
>
>> Hello all,
>>
>>
>>
>> I am done with my *origin advanced installation* (thanks to your useful
>> help) which architecture is composed of *4 virtualized servers* (on the
>> same network):
>>
>> -   1  Master
>>
>> -   2 Nodes
>>
>> -   1 VM hosting Ansible
>>
>>
>>
>> My next steps are to implement/test some use cases with a *three-tier
>> App*(each App’s tier being hosted on a different VM):
>>
>> -   The * horizontal scalability*;
>>
>> -   The * load-balancing* of the Nodes : Keep the system running
>> even if one of the VMs goes down;
>>
>> -   App’s monitoring using *Origin API*: Allow the Origin API to
>> “tell” the App on which VM is hosted each tier. (I still don’t know how to
>> test that though…)
>>
>>
>>
>> There are some * notions* that are still not clear to me:
>>
>> -   From my web console, how can I know *on which Node has my App
>> been deployed*?
>>
>
> If you look in the Browse -> Pods -> select a pod, you should see the node
> where the pod is running.
>
>
>> -   How can I put *each component of my App* on a *separated Node*?
>>
>> -   How does the “*zones*” concept in origin work?
>>
>
> These two are closely related.
>
> 1) In your case it sounds like you would want a zone for each tier:
> appserver, web server, db
> 2) This would require a node with a label of, for example, zone=appserver
> 3) When you create your pod (or replication controller, or deployment
> config) you would want to specify, via a nodeselector, which zone you want
> the pod(s) to land in
>
>
This is not the concept of zones. The point of zones is to spread replicas
between different zones in order to improve HA (for instance, define a zone
per rack, thereby ensuring that taking down a rack doesn't take down your
app that's scaled across multiple zones).

This isn't what you want though. And you'd certainly never put a zone in a
nodeselector for an RC if you're trying to scale it to multiple zones.

For the purpose of separating the tiers of your app, you would still want
to use a nodeselector per DC or RC and corresponding node labels. There's
no other way to designate where you want the pods from different RCs to
land. You just don't want "zones".



> This stuff is scattered throughout the docs:
>
>
> https://docs.openshift.org/latest/admin_guide/manage_nodes.html#updating-labels-on-nodes
>
> https://docs.openshift.org/latest/dev_guide/deployments.html#assigning-pods-to-specific-nodes
>
> I hope this helps.
>
>
>>
>>
>> Content of /etc/ansible/hosts of my Ansible hosting VM:
>>
>> [masters]
>>
>> sv5305.selfdeploy.loc
>>
>> # host group for nodes, includes region info
>>
>> [nodes]
>>
>> sv5305.selfdeploy.loc openshift_node_labels="{'region': 'infra', 'zone':
>> 'default'}" openshift_schedulable=false
>>
>> sv5306.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'east'}"
>>
>> sv5307.selfdeploy.loc openshift_node_labels="{'region': 'primary',
>> 'zone': 'west'}"
>>
>>
>>
>> Thank you in advance.
>>
>>
>>
>> Regards,
>>
>>
>>
>> Olga
>>
>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: Excluding replacement pods from quota?

2016-05-02 Thread Luke Meyer
Use the Recreate deploy strategy rather than Rolling.
https://docs.openshift.org/latest/dev_guide/deployments.html#recreate-strategy
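
With Recreate, the old pod is torn down before the replacement starts, so the deployment never needs quota headroom for two pods at once. The change is a one-liner in the deployment config (name is illustrative):

```yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  replicas: 1
  strategy:
    type: Recreate   # default Rolling would briefly need 2 pods of quota
```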

On Sat, Apr 30, 2016 at 10:24 PM, Andrew Lau  wrote:

> Hi,
>
> Is there a way to have the old pod moved into the terminating scope? Or is
> there an alternative solution for the following use case:
>
> User has the following quota:
> 1 pod in terminating scope
> 1 pod in non-terminating scope
>
> For new builds, the build will complete in the terminating scope but the
> replacement pod will not be able to start due to the quota.
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: OpenShift containerized installation and EFK logging

2016-02-18 Thread Luke Meyer
We're talking (I thought) about where the node is deployed as a container.
So this is at docker run time (or atomic).

Scott jumped on this and worked up a PR for the containerized install.
https://github.com/openshift/origin/pull/7398

On Thu, Feb 18, 2016 at 12:14 PM, Erik Jacobs  wrote:

> So would this be done via an "oc volume" command?
>
> Can this not be made part of the template with a flag?
>
>
> Erik M Jacobs, RHCA
> Principal Technical Marketing Manager, OpenShift Enterprise
> Red Hat, Inc.
> Phone: 646.462.3745
> Email: ejac...@redhat.com
> AOL Instant Messenger: ejacobsatredhat
> Twitter: @ErikonOpen
> Freenode: thoraxe
>
> On Wed, Feb 17, 2016 at 1:48 PM, Luke Meyer  wrote:
>
>> For the containerized install, I am not sure it is documented that you
>> need to mount /var/log into each node container for fluentd to be able to
>> get to logs.
>>
>> On Sat, Feb 6, 2016 at 11:32 AM, Akram Ben Aissi <
>> akram.benai...@gmail.com> wrote:
>>
>>> Hi guys,
>>>
>>> I am running a containerized install of OpenShift, and after the
>>> successful installation of the log centralization with EFK using docs, I
>>> can't get any log search results from Kibana: every result is empty.
>>>
>>> I guess that logging may rely on nodes journalctl output, which is not
>>> populated apparently in case of container install.
>>>
>>> Do you have any tips to make it work: a different configuration for the
>>> logging template? or a way to redirect the openshift container journalctl
>>> to the node journalctl?
>>>
>>> Greetings
>>> Akram
>>>
>>>
>>> ___
>>> dev mailing list
>>> dev@lists.openshift.redhat.com
>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>>
>>>
>>
>> ___
>> dev mailing list
>> dev@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>>
>>
>


Re: OpenShift containerized installation and EFK logging

2016-02-17 Thread Luke Meyer
For the containerized install, I am not sure it is documented that you need
to mount /var/log into each node container for fluentd to be able to get to
logs.

On Sat, Feb 6, 2016 at 11:32 AM, Akram Ben Aissi 
wrote:

> Hi guys,
>
> I am running a containerized install of OpenShift, and after the
> successful installation of the log centralization with EFK using docs, I
> can't get any log search results from Kibana: every result is empty.
>
> I guess that logging may rely on nodes journalctl output, which is not
> populated apparently in case of container install.
>
> Do you have any tips to make it work: a different configuration for the
> logging template? or a way to redirect the openshift container journalctl
> to the node journalctl?
>
> Greetings
> Akram
>
>
> ___
> dev mailing list
> dev@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
>


Re: pod DNS change

2016-01-20 Thread Luke Meyer
I think I actually have a different problem now. The problem is just with
builds; the sub-container running the build doesn't get the same DNS
information (the SkyDNS service IP isn't inserted) and more importantly it
can't reach any of the nameservers it does inherit from the node. But
that's only in my RPM-based environment. If I use a from-source dev
environment, there doesn't even seem to be a sub-container, and builds run
normally. Did behavior change on this recently?

On Fri, Jan 15, 2016 at 11:31 AM, Clayton Coleman 
wrote:

> This is so DNS is HA.  Not sure why you can' get through the firewall.
>
> On Fri, Jan 15, 2016 at 11:27 AM, Luke Meyer  wrote:
> > I rebuilt my dev cluster from HEAD recently and pods were having DNS
> > problems. I'm set up with dnsmasq at port 53 on the master, forwarding
> > cluster requests to SkyDNS running at port 8053, per
> >
> https://developerblog.redhat.com/2015/11/19/dns-your-openshift-v3-cluster/
> >
> > I discovered that pods are now getting the kubernetes service IP
> (172.30.0.1
> > by default) instead of the master IP like they used to. If I inspect that
> > service, I see this:
> >
> > $ oc describe service/kubernetes --namespace default
> > Name:   kubernetes
> > Namespace:  default
> > Labels: component=apiserver,provider=kubernetes
> > Selector:   
> > Type:   ClusterIP
> > IP: 172.30.0.1
> > Port:   https   443/TCP
> > Endpoints:  172.16.4.29:8443
> > Port:   dns 53/UDP
> > Endpoints:  172.16.4.29:8053
> > Port:   dns-tcp 53/TCP
> > Endpoints:  172.16.4.29:8053
> > Session Affinity:   None
> > No events.
> >
> > So there's my problem - DNS requests are presumably being forwarded to
> the
> > master IP, but at port 8053. This port isn't open, but even if I add a
> > firewall rule to open it, it doesn't seem to connect (dig request times
> > out). Also I didn't really want to make requests directly against SkyDNS,
> > because I want my dnsmasq server to answer queries (from node or pod)
> about
> > my rogue domain names as well as cluster addresses.
> >
> > I think I could solve it by just running dnsmasq on a different server
> and
> > including it in /etc/resolv.conf everywhere. I'll try that. But that
> seems
> > like it shouldn't be necessary. Any thoughts on this change? Why was it
> > necessary?
> >
> > ___
> > dev mailing list
> > dev@lists.openshift.redhat.com
> > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
> >
>


pod DNS change

2016-01-15 Thread Luke Meyer
I rebuilt my dev cluster from HEAD recently and pods were having DNS
problems. I'm set up with dnsmasq at port 53 on the master, forwarding
cluster requests to SkyDNS running at port 8053, per
https://developerblog.redhat.com/2015/11/19/dns-your-openshift-v3-cluster/

I discovered that pods are now getting the kubernetes service IP
(172.30.0.1 by default) instead of the master IP like they used to. If I
inspect that service, I see this:

$ oc describe service/kubernetes --namespace default
Name:   kubernetes
Namespace:  default
Labels: component=apiserver,provider=kubernetes
Selector:   
Type:   ClusterIP
IP: 172.30.0.1
Port:   https   443/TCP
Endpoints:  172.16.4.29:8443
Port:   dns 53/UDP
Endpoints:  172.16.4.29:8053
Port:   dns-tcp 53/TCP
Endpoints:  172.16.4.29:8053
Session Affinity:   None
No events.
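For anyone hitting the same symptom, a couple of dig probes against the
addresses in that output can show where resolution dies. This is only a
sketch: the endpoint/service IPs are the ones from my setup above, and the
cluster.local domain follows the blog post, so adjust for your environment.

```shell
# Probe SkyDNS directly on the master endpoint (port 8053):
dig +short @172.16.4.29 -p 8053 kubernetes.default.svc.cluster.local

# Probe via the kubernetes service IP that pods now get as nameserver:
dig +short @172.30.0.1 kubernetes.default.svc.cluster.local
```

If the first probe times out, the problem is reaching SkyDNS at all (e.g.
firewall); if only the second fails, the service-IP path is what's broken.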

So there's my problem - DNS requests are presumably being forwarded to the
master IP, but at port 8053. This port isn't open, but even if I add a
firewall rule to open it, it doesn't seem to connect (dig request times
out). Also I didn't really want to make requests directly against SkyDNS,
because I want my dnsmasq server to answer queries (from node or pod) about
my rogue domain names as well as cluster addresses.

I think I could solve it by just running dnsmasq on a different server and
including it in /etc/resolv.conf everywhere. I'll try that. But that seems
like it shouldn't be necessary. Any thoughts on this change? Why was it
necessary?
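For context, the dnsmasq forwarding described above amounts to roughly this
in /etc/dnsmasq.conf. It's a sketch following the blog post; the cluster
domain and master IP are assumptions from my environment, not anything
OpenShift ships:

```
# Forward cluster name lookups to SkyDNS on the master (port 8053);
# everything else, including my local "rogue" domains, is answered by
# dnsmasq's normal upstream/local configuration.
server=/cluster.local/172.16.4.29#8053
server=/30.172.in-addr.arpa/172.16.4.29#8053
```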


Re: SDN config for HEAD

2016-01-12 Thread Luke Meyer
On Tue, Jan 12, 2016 at 1:27 PM, Brenton Leanhardt 
wrote:

> On Tue, Jan 12, 2016 at 11:50 AM, Luke Meyer  wrote:
> > Right on target, thanks.
> >
> > Manually fixing things up as suggested seems to work (also needed: oadm
> > policy reconcile-cluster-roles --confirm)... though I'm not sure, things
> > aren't going perfectly and I'm seeing some errors in the node logs.
> >
> > I'm trying the RPM build now. If you just `rpmbuild -bb origin.spec` it
> > complains the source tgz isn't in SOURCES. Since I don't know what
> should be
> > included in SOURCES I'm guessing tito is supposed to handle all this.
> But we
> > don't have any tito tags created. This seems a little involved. Do we
> have
> > decent instructions for building RPMs from source?
>
> Does 'tito build --rpm --test' work?
>

Sort of. Without a tito tag it gets you version 0.0.1, and tito tag was
giving me fits with no existing tags. I managed to get it going with "tito
tag --keep-version --auto-changelog-message=MESSAGE" though.
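For anyone else fighting the same thing, the sequence that got me unstuck
was roughly the following. A sketch only: it assumes tito is installed and
you're at the top of an origin checkout, and the changelog message is just
a placeholder.

```shell
# Tag first; an untagged tree builds as version 0.0.1. --keep-version
# avoids bumping, --auto-changelog-message skips the interactive editor.
tito tag --keep-version --auto-changelog-message="build from source"

# Then build test RPMs from the tagged tree.
tito build --rpm --test
```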


>
> I'm CC'ing Troy who may know more about how the Origin RPMs are built
> in the copr.
>
> >
> >
> >
> > On Tue, Jan 12, 2016 at 10:02 AM, Dan Winship  wrote:
> >>
> >> On 01/12/2016 09:31 AM, Luke Meyer wrote:
> >> > I ran an "advanced install" of Origin (which installs and configures
> >> > RPMs from early December), then updated the openshift binary to be
> >> > compiled from master. Perhaps not surprisingly, my nodes won't come up
> >> > now:
> >>
> >> Yes, not surprisingly. If you build new RPMs from origin git and update
> >> to them, rather than just copying in the new openshift binary, then it
> >> ought to work.
> >>
> >> (You can also try just copying
> >>
> >> Godeps/_workspace/src/github.com/openshift/openshift-sdn/plugins/osdn/ovs/bin/*
> >> to /usr/bin/. Depending on how old your RPMs were you may also need to
> >> rm -rf
> >>
> >>
> /usr/libexec/kubernetes/kubelet-plugins/net/exec/redhat~openshift-ovs-subnet.)
> >>
> >> -- Dan
> >>
> >
> >


Re: SDN config for HEAD

2016-01-12 Thread Luke Meyer
Right on target, thanks.

Manually fixing things up as suggested seems to work (also needed: oadm
policy reconcile-cluster-roles --confirm)... though I'm not sure everything
is right; things aren't going perfectly and I'm seeing some errors in the
node logs.

I'm trying the RPM build now. If you just `rpmbuild -bb origin.spec` it
complains the source tgz isn't in SOURCES. Since I don't know what should
be included in SOURCES I'm guessing tito is supposed to handle all this.
But we don't have any tito tags created. This seems a little involved. Do
we have decent instructions for building RPMs from source?



On Tue, Jan 12, 2016 at 10:02 AM, Dan Winship  wrote:

> On 01/12/2016 09:31 AM, Luke Meyer wrote:
> > I ran an "advanced install" of Origin (which installs and configures
> > RPMs from early December), then updated the openshift binary to be
> > compiled from master. Perhaps not surprisingly, my nodes won't come up
> now:
>
> Yes, not surprisingly. If you build new RPMs from origin git and update
> to them, rather than just copying in the new openshift binary, then it
> ought to work.
>
> (You can also try just copying
> Godeps/_workspace/src/github.com/openshift/openshift-sdn/plugins/osdn/ovs/bin/*
> to /usr/bin/. Depending on how old your RPMs were you may also need to
> rm -rf
>
> /usr/libexec/kubernetes/kubelet-plugins/net/exec/redhat~openshift-ovs-subnet.)
>
> -- Dan
>
>


SDN config for HEAD

2016-01-12 Thread Luke Meyer
I ran an "advanced install" of Origin (which installs and configures RPMs
from early December), then updated the openshift binary to be compiled from
master. Perhaps not surprisingly, my nodes won't come up now:

I0111 21:36:41.756871    1259 start_node.go:178] Starting a node connected
to https://master.osv3.example.com:8443
I0111 21:36:41.769414    1259 plugins.go:71] No cloud provider specified.
I0111 21:36:41.828823    1259 start_node.go:255] Starting node
master.osv3.example.com (v1.1-743-ge51ef9e)
I0111 21:36:41.835858    1259 node.go:54] Connecting to Docker at
unix:///var/run/docker.sock
I0111 21:36:41.841582    1259 manager.go:128] cAdvisor running in
container: "/"
E0111 21:36:41.955464    1259 controller.go:133] Failed to configure docker
networking: exec: "openshift-sdn-docker-setup.sh": executable file not
found in $PATH
F0111 21:36:41.961927    1259 node.go:173] SDN Node failed: Failed to start
plugin: exec: "openshift-sdn-docker-setup.sh": executable file not found in
$PATH

I assume there's just some packaging/config change that either hasn't made
it into the Origin docs yet or I don't know where to look. Pointers to the
right information would be appreciated.
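For the archives: Dan Winship's fix from his reply in this thread boils
down to the commands below. A sketch only; it assumes you're sitting in an
origin git checkout, and the exact paths may differ on your layout.

```shell
# Put the SDN helper scripts (including openshift-sdn-docker-setup.sh)
# somewhere on the node's $PATH, where the new binary expects them:
sudo cp Godeps/_workspace/src/github.com/openshift/openshift-sdn/plugins/osdn/ovs/bin/* /usr/bin/

# Depending on how old the installed RPMs were, the stale kubelet
# network plugin directory may also need removing:
sudo rm -rf /usr/libexec/kubernetes/kubelet-plugins/net/exec/redhat~openshift-ovs-subnet
```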


Re: logging service in openshift

2016-01-11 Thread Luke Meyer
No, and there aren't any plans yet to use/provide logstash.

On Sat, Jan 9, 2016 at 1:47 AM, priyanka Gupta  wrote:

> Hi Luke,
>
> Thanks for the explanation, so Logstash is not available in OpenShift 3 yet?
>
> Thanks!
>
>
> On Friday, January 8, 2016, Jimmi Dyson  wrote:
>
>> On 8 January 2016 at 15:14, Luke Meyer  wrote:
>> >
>> >
>> > On Fri, Jan 8, 2016 at 10:08 AM, priyanka Gupta
>> >  wrote:
>> >>
>> >> Hi, I have deployed EFK in OpenShift 3.1. In the doc it is mentioned:
>> >> "Unfortunately there is no way to stream logs as they are created at
>> >> this time."
>> >>
>> >> How much of a time gap is there between log generation and display in
>> >> Kibana, and why?
>> >
>> >
>> > Should normally be just a few seconds. The gap consists of:
>> > 1. fluentd reading the logs from local disk (not sure how long this
>> takes,
>> > and it would depend on how active all the other container logs were too)
>> > 2. ES indexing the logs, which should just take about a second
>> > 3. kibana retrieving the logs from ES; you can have kibana auto-reload
>> > periodically
>>
>> One minor addition: logs are buffered in fluentd pod & bulk loaded
>> into Elasticsearch periodically (better performance than individual
>> requests for each log message). This is configurable in the
>> Elasticsearch plugin - not sure what we have set in our production
>> deployments, but likely to be 5-10 seconds I think.
>>
>> >
>> >>
>> >>
>> >> Is there also a template available for Logstash too?
>> >
>> >
>> > No, that has a very different deployment architecture.
>> >
>> >>
>> >>
>> >> Thanks a lot in advance!
>> >>
>> >>
>> >> On Tue, Oct 13, 2015 at 8:04 AM, priyanka Gupta
>> >>  wrote:
>> >>>
>> >>> Hi , Thanks :)
>> >>>
>> >>>
>> >>> On Sun, Oct 11, 2015 at 1:31 PM, Jimmi Dyson 
>> wrote:
>> >>>>
>> >>>> 3.1 will include a productised version of that logging solution -
>> >>>> patience!
>> >>>>
>> >>>> On 10 October 2015 at 16:09, Nakayama Kenjiro
>> >>>>  wrote:
>> >>>> > Hi,
>> >>>> >
>> >>>> > OSE v3 doesn't have the doc, but origin(upstream) doc has.
>> >>>> >
>> >>>> >
>> >>>> >
>> https://docs.openshift.org/latest/admin_guide/aggregate_logging.html#using-elasticsearch
>> >>>> >
>> >>>> > So, I (we?) am not sure that it can work on OSEv3.
>> >>>> >
>> >>>> > Thanks,
>> >>>> > Kenjiro
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > On Sat, Oct 10, 2015 at 11:38 PM, priyanka Gupta
>> >>>> >  wrote:
>> >>>> >>
>> >>>> >> Hi , can anyone please help me with these points , need to know
>> what
>> >>>> >> is
>> >>>> >> provided by OSE 3?
>> >>>> >>
>> >>>> >> Thanks much !!
>> >>>> >>
>> >>>> >>
>> >>>> >> On Friday, October 9, 2015, priyanka Gupta
>> >>>> >> 
>> >>>> >> wrote:
>> >>>> >>>
>> >>>> >>> Hi,
>> >>>> >>>
>> >>>> >>> I want to implement logging service in OSE v3, from current
>> >>>> >>> available
>> >>>> >>> docs I could see only Fluentd is supported now:
>> >>>> >>>
>> >>>> >>>
>> >>>> >>>
>> >>>> >>>
>> https://docs.openshift.com/enterprise/3.0/admin_guide/aggregate_logging.html
>> >>>> >>>
>> >>>> >>>
>> >>>> >>> I want to implement ELK (Elasticsearch, Logstash and Kibana)
>> stack
>> >>>> >>> in
>> >>>> >>> openshift , is this something we can use in running OSE v3
>> >>>> >>> environment. I
>> >>>> >>> have gone through this blog
>> >>>> >>>
>> >>>> >>>
>> https://blog.openshift.com/openshift-logs-metrics-management-logstash-graphite/
>> >>>> >>>
>> >>>> >>> But I think it is for OSE v2 only.
>> >>>> >>>
>> >>>> >>> Can anyone please tell me how to implement ELK in v3, any docs?
>> >>>> >>>
>> >>>> >>> and if not what else can you used to replace this?
>> >>>> >>>
>> >>>> >>>
>> >>>> >>> Thanks a lot in advance !
>> >>>> >>
>> >>>> >>
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > --
>> >>>> > Kenjiro NAKAYAMA 
>> >>>> > GPG Key fingerprint = ED8F 049D E67A 727D 9A44  8E25 F44B E208 C946
>> >>>> > 5EB9
>> >>>> >
>> >>>
>> >>>
>> >>
>> >>
>> >
>>
>
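The buffering Jimmi describes above maps to the fluent-plugin-elasticsearch
output settings. A hedged sketch of the relevant fragment, in fluentd
v0.12-era syntax; the host and flush interval here are illustrative, not
what the deployer actually ships:

```
<match **>
  type elasticsearch
  host logging-es.logging.svc.cluster.local   # illustrative host
  port 9200
  # Records are buffered and bulk-loaded into Elasticsearch periodically,
  # rather than one request per log message:
  buffer_type memory
  flush_interval 5s                           # illustrative period
</match>
```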