Re: readiness probes and clustered discovery

2016-05-19 Thread Clayton Coleman
We have the basic support for this today - the endpoints object also
records unready IPs.  We are adding two constructs that will make this
easy to consume: a DNS entry that returns all endpoints regardless of
whether they are ready, and an annotation on a service that instructs
the endpoints list to include even unready endpoints.

Those will land in 1.3/3.3
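For reference, a minimal sketch of what the service-side piece might look like once it lands. The annotation key shown (service.alpha.kubernetes.io/tolerate-unready-endpoints) is an assumption about the final name and could differ, and the name/selector/port values are purely illustrative; combined with a headless service, the DNS lookup would then return every member's IP whether it is ready or not:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "elasticsearch-cluster",
    "annotations": {
      "service.alpha.kubernetes.io/tolerate-unready-endpoints": "true"
    }
  },
  "spec": {
    "clusterIP": "None",
    "selector": { "component": "elasticsearch" },
    "ports": [
      { "name": "transport", "port": 9300 }
    ]
  }
}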

> On May 19, 2016, at 12:59 PM, Luke Meyer  wrote:
>
> We have a plugin for Elasticsearch that clusters by looking up the endpoints
> of its clustering service (which runs on a separate port, 9300, instead of the
> http port 9200). But in order to be among the endpoints of a service, the cluster
> members have to be considered "up", so that has to happen before they can even
> discover each other. The result is that there can't be a meaningful readiness
> probe, and clients of the service get back errors until the cluster is really up.
>
> We could get around this if readiness probes could be honored/ignored by 
> specific services, or if there were some other method of indicating a more 
> nuanced "readiness". If the service for port 9300 could consider the members 
> up once in "Running" state, but the service at port 9200 waited for a 
> readiness check, everything would work out well.
>
> Is this strictly a kubernetes issue? Is there any movement in this direction? 
> It seems like something that many clustered services would benefit from.

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


readiness probes and clustered discovery

2016-05-19 Thread Luke Meyer
We have a plugin for Elasticsearch that clusters by looking up the endpoints
on its clustering service (which runs on a separate port, 9300, instead of the
http port 9200). But in order to be among the endpoints of a service, the
cluster members have to be considered "up", so that has to happen before they
can even discover each other. The result is that there can't be a meaningful
readiness probe, and clients of the service get back errors until the cluster
is really up.

We could get around this if readiness probes could be honored/ignored by
specific services, or if there were some other method of indicating a more
nuanced "readiness". If the service for port 9300 could consider the
members up once in "Running" state, but the service at port 9200 waited for
a readiness check, everything would work out well.

Is this strictly a kubernetes issue? Is there any movement in this
direction? It seems like something that many clustered services would
benefit from.
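For concreteness, this is roughly the readiness probe we would like to put on the Elasticsearch container (a container-spec fragment; the image name and probe path/timings here are illustrative). Today we can't use it, because failing it also keeps a not-yet-clustered member out of the 9300 discovery service's endpoints:

{
  "name": "elasticsearch",
  "image": "our-elasticsearch-image",
  "ports": [
    { "name": "http", "containerPort": 9200 },
    { "name": "transport", "containerPort": 9300 }
  ],
  "readinessProbe": {
    "httpGet": { "path": "/", "port": 9200 },
    "initialDelaySeconds": 10,
    "timeoutSeconds": 5
  }
}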
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: 'oc cluster up' command

2016-05-19 Thread Rodolfo Carvalho
https://paste.fedoraproject.org/368485/66551146/

Rodolfo Carvalho

OpenShift Developer Experience

On Thu, May 19, 2016 at 3:57 PM, Cesar Wong  wrote:

> Rodolfo, can you run 'oc cluster up --loglevel=5' and put the output in a
> pastebin?
>
> On Thu, May 19, 2016 at 9:52 AM, Rodolfo Carvalho wrote:
>
> I'm running on VirtualBox, fedora23 (vagrant up from origin repo)
>
>
> $ oc cluster up
> Error: unknown command "cluster" for "oc"
> Run 'oc --help' for usage.
> [vagrant@localhost ~]$ oc cluster up
> -- Checking Docker client ... OK
> -- Checking for existing OpenShift container ... OK
> -- Checking for openshift/origin image ...
> Pulling image openshift/origin:latest
> Downloading 2 layers ( 2%)
> Extracting
> Image pull complete
> -- Checking Docker daemon configuration ... OK
> -- Checking for available ports ... OK
> -- Checking type of volume mount ...
> Using nsenter mounter for OpenShift volumes
> -- Checking Docker version ... OK
> -- Creating volume share ... OK
> -- Finding server IP ...
> Using 10.0.2.15 as the server IP
> -- Starting OpenShift container ...
> Creating initial OpenShift configuration
> Starting OpenShift using container 'origin'
> Waiting for API server to start listening
> OpenShift server started
> -- Installing registry ... error: the server could not find the requested resource
> error: the server could not find the requested resource
> FAIL
> Error: exit directly
>
>
>
>
> I get the same error when I'm starting the server with `openshift start`
> and try to create the registry with `oadm registry`.
>
> I've done some debugging and it seems that the error happens for any resource
> defined in kube:
>
>
>
> [vagrant@localhost ~]$ oadm registry -n default
> --config=openshift.local.config/master/admin.kubeconfig
> >>> List of objects:
> {TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""},
> ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""},
> Items:[]runtime.Object{(*api.ServiceAccount)(0xc8206c5680),
> (*api.ClusterRoleBinding)(0xc82027a000), (*api.DeploymentConfig)(0xc82027a160),
> (*api.Service)(0xc82027a2c0)}}
> --> Creating registry registry ...
> error: the server could not find the requested resource
> error: rolebinding "registry-registry-role" already exists
> error: deploymentconfigs "docker-registry" already exists
> error: the server could not find the requested resource
> --> Failed
>
>
>
>
> The error lines correspond to ServiceAccount and Service (defined in
> kube); ClusterRoleBinding and DeploymentConfig (defined in OpenShift) are ok.
>
>
>
> Can anybody help?
>
> Rodolfo Carvalho
>
> OpenShift Developer Experience
>
> On Wed, May 18, 2016 at 2:48 AM, Cesar Wong  wrote:
>
>> PR #7675  introduces a
>> new command 'oc cluster up' that allows you to start an OpenShift
>> all-in-one cluster with a configured registry, router and an initial set of
>> templates and image streams. The 'oc cluster down' command will stop the
>> cluster.
>>
>> The command can be run from any client platform we support (Windows, OS
>> X, Linux). All it requires is a valid Docker connection.
>>
>> At its most basic, ensure Docker commands work, like 'docker ps',
>> download the 'oc' binary
>>  for your platform,
>> and run:
>>
>> $ oc cluster up
>>
>> To stop, run
>>
>> $ oc cluster down
>>
>> You can read more about other options and usage in specific situations
>> here
>> 
>> .
>>
>>
>>
>>
>>
>>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: 'oc cluster up' command

2016-05-19 Thread Rodolfo Carvalho
I'm running on VirtualBox, fedora23 (vagrant up from origin repo)


$ oc cluster up
Error: unknown command "cluster" for "oc"
Run 'oc --help' for usage.
[vagrant@localhost ~]$ oc cluster up
-- Checking Docker client ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin image ...
   Pulling image openshift/origin:latest
   Downloading 2 layers (  2%)
   Extracting
   Image pull complete
-- Checking Docker daemon configuration ... OK
-- Checking for available ports ... OK
-- Checking type of volume mount ...
   Using nsenter mounter for OpenShift volumes
-- Checking Docker version ... OK
-- Creating volume share ... OK
-- Finding server IP ...
   Using 10.0.2.15 as the server IP
-- Starting OpenShift container ...
   Creating initial OpenShift configuration
   Starting OpenShift using container 'origin'
   Waiting for API server to start listening
   OpenShift server started
-- Installing registry ... error: the server could not find the requested resource
error: the server could not find the requested resource
FAIL
   Error: exit directly




I get the same error when I'm starting the server with `openshift start`
and try to create the registry with `oadm registry`.

I've done some debugging and it seems that the error happens for any resource
defined in kube:



[vagrant@localhost ~]$ oadm registry -n default
--config=openshift.local.config/master/admin.kubeconfig
>>> List of objects:
{TypeMeta:unversioned.TypeMeta{Kind:"", APIVersion:""},
ListMeta:unversioned.ListMeta{SelfLink:"", ResourceVersion:""},
Items:[]runtime.Object{(*api.ServiceAccount)(0xc8206c5680),
(*api.ClusterRoleBinding)(0xc82027a000), (*api.DeploymentConfig)(0xc82027a160),
(*api.Service)(0xc82027a2c0)}}
--> Creating registry registry ...
error: the server could not find the requested resource
error: rolebinding "registry-registry-role" already exists
error: deploymentconfigs "docker-registry" already exists
error: the server could not find the requested resource
--> Failed




The error lines correspond to ServiceAccount and Service (defined in
kube); ClusterRoleBinding and DeploymentConfig (defined in OpenShift) are ok.



Can anybody help?

Rodolfo Carvalho

OpenShift Developer Experience

On Wed, May 18, 2016 at 2:48 AM, Cesar Wong  wrote:

> PR #7675  introduces a new
> command 'oc cluster up' that allows you to start an OpenShift all-in-one
> cluster with a configured registry, router and an initial set of templates
> and image streams. The 'oc cluster down' command will stop the cluster.
>
> The command can be run from any client platform we support (Windows, OS X,
> Linux). All it requires is a valid Docker connection.
>
> At its most basic, ensure Docker commands work, like 'docker ps',
> download the 'oc' binary
>  for your platform,
> and run:
>
> $ oc cluster up
>
> To stop, run
>
> $ oc cluster down
>
> You can read more about other options and usage in specific situations
> here
> 
> .
>
>
>
>
>
>
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: How to specify ImageStream as a source for container

2016-05-19 Thread Tomas Nozicka
For the record:
https://github.com/openshift/origin/issues/8937
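For anyone landing here from the issue, a trimmed-down sketch of the setup being discussed (Clayton's suggestion below: set the container image to " " and let the automatic ImageChange trigger resolve it on the first deployment). Apart from the trigger and container fields quoted further down in the thread, the values are illustrative:

{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": { "name": "jenkins" },
  "spec": {
    "replicas": 1,
    "selector": { "name": "jenkins" },
    "triggers": [
      {
        "type": "ImageChange",
        "imageChangeParams": {
          "automatic": true,
          "containerNames": [ "jenkins" ],
          "from": {
            "kind": "ImageStreamTag",
            "name": "jenkins:latest",
            "namespace": "${NAMESPACE}"
          }
        }
      }
    ],
    "template": {
      "metadata": { "labels": { "name": "jenkins" } },
      "spec": {
        "containers": [
          { "name": "jenkins", "image": " " }
        ]
      }
    }
  }
}

As described below, switching "automatic" to false in this setup is what leaves deployment #1 stuck pulling image " " - that is the issue filed above.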

On Wed, 2016-05-18 at 08:13 -0400, Clayton Coleman wrote:
> Please file an issue for that on github.
> 
> > 
> > On May 18, 2016, at 7:58 AM, Tomas Nozicka wrote:
> > 
> > It's good to know that someone is working on that.
> > 
> > Apart from those errors, I found it has a bigger issue:
> > if you decide to use
> > "imageChangeParams": { "automatic": false, ... }
> > which is a valid choice if you do not want to re-deploy your Jenkins
> > master while it is doing some important task.
> >
> > This way you end up with a failed deployment #1 and you have to
> > deploy #2 manually, but since you are triggering it manually it
> > starts from image " " and fails again. You will be stuck in this
> > loop and never get it deployed this way! At least I did not.
> > 
> > 
> > > 
> > > On Tue, 2016-05-17 at 10:56 -0400, Clayton Coleman wrote:
> > > There is work going on to ensure that you don't get a failure for
> > > the first case.
> > > 
> > > On Tue, May 17, 2016 at 10:16 AM, Tomas Nozicka wrote:
> > > > 
> > > > 
> > > > This way you will end up with deployment #2 and a failed #1 with
> > > > errors, which is not nice at all, even more so for the official
> > > > templates.
> > > > 
> > > > ---
> > > > Back-off pulling image " "
> > > > 
> > > > Error syncing pod, skipping: failed to "StartContainer" for "jenkins"
> > > > with ErrImagePull: "API error (500): repository name component must
> > > > match \"[a-z0-9](?:-*[a-z0-9])*(?:[._][a-z0-9](?:-*[a-z0-9])*)*\"\n"
> > > > 
> > > > Failed to pull image " ": API error (500): repository name component
> > > > must match "[a-z0-9](?:-*[a-z0-9])*(?:[._][a-z0-9](?:-*[a-z0-9])*)*"
> > > > ---
> > > > 
> > > > Is there some proper solution?
> > > > 
> > > > > 
> > > > > On Tue, 2016-05-17 at 10:04 -0400, Clayton Coleman wrote:
> > > > > 
> > > > > Set image to " "
> > > > > 
> > > > > On Tue, May 17, 2016 at 9:54 AM, Tomas Nozicka wrote:
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > I am not able to specify an ImageStream as the source that my
> > > > > > container runs from in my DeploymentConfig. I can only specify an
> > > > > > image there [1], but not an ImageStream. Yet when I set up triggers
> > > > > > for the DeploymentConfig, I can specify an ImageStream in the form
> > > > > > of an ImageStreamTag.
> > > > > > 
> > > > > > So deployment #1 is from an image, and #2 through infinity are done
> > > > > > from the ImageStreamTag.
> > > > > > 
> > > > > > Is there some way to get deployment #1 from the ImageStreamTag?
> > > > > > 
> > > > > > Here is an example:
> > > > > > 
> > > > > > ---
> > > > > > "kind": "DeploymentConfig"
> > > > > > ...
> > > > > > "triggers": [
> > > > > >   {
> > > > > > "type": "ImageChange",
> > > > > >   "imageChangeParams": {
> > > > > >  "automatic": true,
> > > > > >  "containerNames": [
> > > > > >    "jenkins"
> > > > > >   ],
> > > > > >   "from": {
> > > > > > "kind": "ImageStreamTag",
> > > > > > "name": "jenkins:latest",
> > > > > > "namespace": "${NAMESPACE}"
> > > > > >   }
> > > > > >   },
> > > > > > ...
> > > > > > "template": {
> > > > > > ...
> > > > > >   "spec": {
> > > > > > "containers": [
> > > > > >   {
> > > > > > "name": "jenkins",
> > > > > > "image": "${JENKINS_IMAGE}",
> > > > > >   },
> > > > > > ...
> > > > > > ---
> > > > > > 
> > > > > > Deployment:
> > > > > >  - #1 deploys image ${JENKINS_IMAGE}
> > > > > >  - #2 (and future ones) deploy from ImageStreamTag 'jenkins:latest'
> > > > > > in namespace 'openshift'
> > > > > > 
> > > > > > 
> > > > > > Thanks,
> > > > > > Tomas
> > > > > > 
> > > > > > [1] - https://docs.openshift.org/latest/rest_api/openshift_v1.html#v1-container
> > > > > > 

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: [Origin] Fail to build from remote Git repository

2016-05-19 Thread ABDALA Olga
Hi Ben,

So I deployed the Jenkins image as asked, but I got an error.
Here is the CLI output:

[screenshot of the CLI output attached as image001.png; not included in the archive]


From: Ben Parees [mailto:bpar...@redhat.com]
Sent: Thursday, May 19, 2016 13:01
To: ABDALA Olga
Cc: dev
Subject: Re: [Origin] Fail to build from remote Git repository


On May 19, 2016 5:26 AM, "ABDALA Olga" wrote:
>
> Hello,
>
>
>
> I have been trying to deploy my application, which already exists on GitHub, on 
> OpenShift using the “oc new-app” command, but I have been receiving a build 
> error and I don’t know where it might be coming from.
>
>
>
> Here is what I get after running the command:
>
>
>
>
>
> But when I go through the logs, here is what I get:
>
>
>
>
>
> Does anybody know what might be the cause of the fail?

Basically the git clone is failing. Can you deploy our jenkins image to a pod, 
rsh into it, and attempt a git clone from there? That will hopefully give us 
more information about the nature of the failure.

>
>
>
> Ps: The git repo is public…
>
>
>
> Thank you!
>
>
>
> Olga A.
>
>
>
>
>

Ben Parees | OpenShift
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: [Origin] Fail to build from remote Git repository

2016-05-19 Thread Ben Parees
On May 19, 2016 5:26 AM, "ABDALA Olga"  wrote:
>
> Hello,
>
>
>
> I have been trying to deploy my application, which already exists on
> GitHub, on OpenShift using the “oc new-app” command, but I have been
> receiving a build error and I don’t know where it might be coming from.
>
>
>
> Here is what I get after running the command:
>
>
>
>
>
> But when I go through the logs, here is what I get:
>
>
>
>
>
> Does anybody know what might be the cause of the fail?

Basically the git clone is failing. Can you deploy our jenkins image to a
pod, rsh into it, and attempt a git clone from there? That will hopefully
give us more information about the nature of the failure.
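Something along these lines should do it (the pod and repository names are placeholders):

$ oc get pods                        # find the name of the running jenkins pod
$ oc rsh <jenkins-pod-name>          # open a shell inside the pod
$ git clone https://github.com/<your-user>/<your-repo>.git   # run inside the pod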

>
>
>
> Ps: The git repo is public…
>
>
>
> Thank you!
>
>
>
> Olga A.
>
>
>
>
>

Ben Parees | OpenShift
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev