Re: no items in "browse catalog" page

2016-08-15 Thread Ravi Kapoor
Thanks Jessica, that was helpful. I had to do the following:

1. oc cluster up
it gave the following error:
 Error: did not detect an --insecure-registry argument on the Docker daemon
   Solution:

 Ensure that the Docker daemon is running with the following argument:
--insecure-registry 172.30.0.0/16

2. A Google search reveals too many places where this could be edited.
For me, I had to add it to the ExecStart line in the file
/etc/systemd/system/docker.service (see the excerpt below).
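For anyone else hitting this, here is roughly what the relevant part of my
unit file looks like after the edit (a sketch only; the exact daemon binary
name and the other flags already on your ExecStart line will differ by distro
and Docker version):

    # /etc/systemd/system/docker.service (excerpt)
    [Service]
    ExecStart=/usr/bin/dockerd --insecure-registry 172.30.0.0/16

    # pick up the change and restart the daemon
    sudo systemctl daemon-reload
    sudo systemctl restart docker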

Now I do see images as shown in the walkthrough
https://www.youtube.com/watch?v=yFPYGeKwmpk

3. However, as I follow the tutorial, at time 6:30
(https://youtu.be/yFPYGeKwmpk?t=390), the deployment fails. The log shows the
following error:
error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Once again, I find too many suggestions for how this can be fixed, and I am
afraid trying all of them will take me days.
One of the pages said this has been fixed in OpenShift 1.3; however, I am
already running 1.3.

thanks so much



On Mon, Aug 15, 2016 at 4:33 PM, Jessica Forrester <jforr...@redhat.com>
wrote:

> So if you are just running a local dev cluster to try things out, I
> recommend using 'oc cluster up' instead, it will set up a lot for you,
> including all the example image streams and templates.
>
> If you want to add them in an existing cluster see step 10 here
> https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop-on-virtual-machine-using-vagrant
>
> Where the files it is referring to are in the origin repo under examples.
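> For reference, a minimal sketch of those commands from a local clone of the
> origin repo (the file and directory names are illustrative; check the
> examples/ directory of your checkout for what is actually there):
>
>   oc create -f examples/image-streams/image-streams-centos7.json -n openshift
>   oc create -f examples/db-templates -n openshift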
>
> On Mon, Aug 15, 2016 at 7:09 PM, Ravi Kapoor <ravikapoor...@gmail.com>
> wrote:
>
>>
>> I am a newbie, so excuse my ignorance. I have tried for 2 days now to get
>> "browse catalog" page to show me catalog. I do not see any errors nor
>> images.
>> Due to this I am not able to follow any tutorials.
>> thanks for helping
>>
>>
>> This is what my page looks like (text copy in case image attachments are
>> not allowed)
>> 
>> No images or templates.
>>
>> No images or templates are loaded for this project or the shared
>> openshift namespace. An image or template is required to add content.
>>
>> To add an image stream or template from a file, use the editor in the
>> Import YAML / JSON tab, or run the following command:
>>
>> oc create -f  -n test
>> Back to overview
>> 
>>
>> Here is a screenshot
>>
>> [image: Inline image 1]
>>
>> ___
>> users mailing list
>> users@lists.openshift.redhat.com
>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>
>>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


no items in "browse catalog" page

2016-08-15 Thread Ravi Kapoor
I am a newbie, so excuse my ignorance. I have tried for two days now to get
the "browse catalog" page to show me the catalog. I do not see any errors or
any images.
Because of this, I am not able to follow any tutorials.
Thanks for helping.


This is what my page looks like (text copy in case image attachments are
not allowed)

No images or templates.

No images or templates are loaded for this project or the shared openshift
namespace. An image or template is required to add content.

To add an image stream or template from a file, use the editor in the
Import YAML / JSON tab, or run the following command:

oc create -f  -n test
Back to overview


Here is a screenshot

[image: Inline image 1]
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: no items in "browse catalog" page

2016-08-17 Thread Ravi Kapoor
Unfortunately I am not familiar enough to follow instructions such as "giving
the API server to use the public key to verify tokens".
Also, that was more than a year ago; how is the out-of-the-box version still
unstable enough that it doesn't work on its own without fixes?
Are there detailed instructions on how to get it up and running?


On Mon, Aug 15, 2016 at 10:49 PM, Jonathan Yu <jaw...@redhat.com> wrote:

> Hey Ravi,
>
> On Mon, Aug 15, 2016 at 8:35 PM, Ravi Kapoor <ravikapoor...@gmail.com>
> wrote:
>
>> Thanks Jessica, that was helpful. I had to do following
>>
>> 1. oc cluster up
>> it gave error
>>  Error: did not detect an --insecure-registry argument on the Docker
>> daemon
>>Solution:
>>
>>  Ensure that the Docker daemon is running with the following argument:
>> --insecure-registry 172.30.0.0/16
>>
>
> You will need to modify your /etc/sysconfig/docker file (on Fedora and I
> think also CentOS and RHEL) to add this flag.  Other OSes will have these
> flags potentially stored elsewhere, like /etc/default on Debian and Ubuntu.
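> On those Fedora-family systems the edit is typically a one-liner in the
> OPTIONS variable (a sketch only; keep whatever flags are already there):
>
>   # /etc/sysconfig/docker (excerpt)
>   OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
>
> (some packaging also provides a dedicated INSECURE_REGISTRY= variable for
> the same purpose), followed by a restart of the docker service.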
>
>>
>> 2. Google search reveals too many places where this should be edited.
>>
>
> Yes, unfortunately different distros do things differently as there's no
> "standard" approach. I would suggest adding your distro name to your Google
> searches to get the most relevant results.
>
>
>> For me, I had to add it to ExecStart in file
>> /etc/systemd/system/docker.service
>>
>> Now I do see images as shown in walkthrough https://www.youtube.com/watch?v=yFPYGeKwmpk
>>
>> 3. However as I follow the tutorial, at time 6:30 (
>> https://youtu.be/yFPYGeKwmpk?t=390), it fails to deploy. Log shows
>> following error
>> error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
>>
>
> Hmm, this looks interesting.  Unfortunately I'm not an expert so I'm not
> sure where to look for this.  A quick search for the error message yields
> this bug: https://github.com/kubernetes/kubernetes/issues/10620 - does
> anything in there help?
>
>>
>> Once again, I find too many suggestions to how this can be fixed. I am
>> afraid trying all those will take me days.
>> One of the pages said this has been fixed in OS 1.3, however I am running
>> OS 1.3
>>
>> thanks so much
>>
>>
>>
>> On Mon, Aug 15, 2016 at 4:33 PM, Jessica Forrester <jforr...@redhat.com>
>> wrote:
>>
>>> So if you are just running a local dev cluster to try things out, I
>>> recommend using 'oc cluster up' instead, it will set up a lot for you,
>>> including all the example image streams and templates.
>>>
>>> If you want to add them in an existing cluster see step 10 here
>>> https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop-on-virtual-machine-using-vagrant
>>>
>>> Where the files it is referring to are in the origin repo under examples.
>>>
>>> On Mon, Aug 15, 2016 at 7:09 PM, Ravi Kapoor <ravikapoor...@gmail.com>
>>> wrote:
>>>
>>>>
>>>> I am a newbie, so excuse my ignorance. I have tried for 2 days now to
>>>> get "browse catalog" page to show me catalog. I do not see any errors nor
>>>> images.
>>>> Due to this I am not able to follow any tutorials.
>>>> thanks for helping
>>>>
>>>>
>>>> This is what my page looks like (text copy in case image attachments
>>>> are not allowed)
>>>> 
>>>> No images or templates.
>>>>
>>>> No images or templates are loaded for this project or the shared
>>>> openshift namespace. An image or template is required to add content.
>>>>
>>>> To add an image stream or template from a file, use the editor in the
>>>> Import YAML / JSON tab, or run the following command:
>>>>
>>>> oc create -f  -n test
>>>> Back to overview
>>>> 
>>>>
>>>> Here is a screenshot
>>>>
>>>> [image: Inline image 1]
>>>>
>>>> ___
>>>> users mailing list
>>>> users@lists.openshift.redhat.com
>>>> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
>>>>
>>>>
>>>
>>
>> ___
>> u

Re: few basic questions about S2I and docker run

2016-09-12 Thread Ravi Kapoor
Hi Ben,

I am finally able to run my Node.js code on OpenShift with both approaches
(volume mount as well as S2I).
I was also able to resolve most of the other issues I mentioned and was able
to run a JEE application as well.

Thanks a lot for helping me through all the silly questions.
The good news is that my company will now be using OpenShift to manage our
Docker containers/deployments.

regards


On Sat, Sep 10, 2016 at 8:23 AM, Ben Parees <bpar...@redhat.com> wrote:

> you can define a command on the container within the pod:
> http://kubernetes.io/docs/user-guide/configuring-containers/#launching-a-container-using-a-configuration-file
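> As a rough sketch, the relevant fields on the container look something like
> this (the image and the file path are just placeholders):
>
>   "containers": [
>     {
>       "name": "myapp",
>       "image": "node:4.4.7",
>       "command": ["node"],
>       "args": ["/usr/src/app/server.js"]
>     }
>   ]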
>
>
> On Fri, Sep 9, 2016 at 5:21 PM, Ravi <ravikapoor...@gmail.com> wrote:
>
>>
>> Thank you for this help.
>>
>> I was trying nginx because after invoking container, I do not have to run
>> a command. For java or node, after the container is run I will need to run
>> a command e.g.
>>
>> java -jar myapp.jar
>> OR
>> node server.js
>>
>> Can you guide me how to add this to the json file or point me to
>> documentation so I can try this?
>>
>> thanks so much
>>
>>
>> On 9/8/2016 6:56 PM, Ben Parees wrote:
>>
>>> Downloads$ oc get pods
>>> NAME             READY     STATUS    RESTARTS   AGE
>>> nginx-1-deploy   1/1       Running   0          14s
>>> nginx-1-rmfl9    0/1       Error     0          11s
>>>
>>> Downloads$ oc logs nginx-1-rmfl9
>>> 2016/09/09 01:54:21 [warn] 1#1: the "user" directive makes sense only if
>>> the master process runs with super-user privileges, ignored in
>>> /etc/nginx/nginx.conf:2
>>> nginx: [warn] the "user" directive makes sense only if the master
>>> process runs with super-user privileges, ignored in
>>> /etc/nginx/nginx.conf:2
>>> 2016/09/09 01:54:21 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp"
>>> failed (13: Permission denied)
>>> nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13:
>>> Permission denied)
>>>
>>>
>>> the nginx image probably only works when run as root or as some other
>>> specific user.  when images are run in openshift, by default they are
>>> assigned a random uid for security purposes.  that can cause issues with
>>> images that expect to run as a specific user.  please see our
>>> documentation:
>>>
>>> https://docs.openshift.org/latest/creating_images/guidelines.html#openshift-origin-specific-guidelines
>>> (the section on supporting arbitrary user IDs)
>>>
>>> to relax the restriction, see:
>>> https://docs.openshift.org/latest/admin_guide/manage_scc.html#enable-images-to-run-with-user-in-the-dockerfile
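>>> As a sketch of what that typically involves (run by a cluster admin, in the
>>> project where the pod runs; check the doc above for the exact form in your
>>> version):
>>>
>>>   oadm policy add-scc-to-user anyuid -z default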
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Sep 8, 2016 at 9:50 PM, Ravi <ravikapoor...@gmail.com> wrote:
>>>
>>>
>>> oh, forgot to add, I do not have any readiness probe.
>>>
>>> On 9/8/2016 6:47 PM, Ravi Kapoor wrote:
>>>
>>> I removed volumes, pod still failed. json and logs attached
>>>
>>>
>>>
>>> On Thu, Sep 8, 2016 at 6:35 PM, Ben Parees <bpar...@redhat.com> wrote:
>>>
>>> though i don't see it in your json it sounds like you have a
>>> readiness probe defined on your pod and it's not being met
>>> successfully.
>>>
>>> the other possibility is it has to do w/ your mounts.  can
>>> you
>>> temporarily remove the volume mounts and see if the pod
>>> comes up?
>>>
>>>
>>> On Thu, Sep 8, 2016 at 8:33 PM, Ravi Kapoor <ravikapoor...@gmail.com> wrote:
>>>
>>> Pod deployment failed. error in console log is
>>>
>>> --> Scaling nginx-1 to 1
>>> --> Waiting up to 10m0s for pods in deployment nginx-1
>>> to become
>>> ready
>>> error: update acceptor rejected nginx-1: pods for
>>> deployment
>>> "nginx-1" took longer t

Re: Pod does not have Scale up/down buttons

2016-09-19 Thread Ravi Kapoor
Once more, now with JSON

{
    "kind": "List",
    "apiVersion": "v1beta3",
    "metadata": {},
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "labels": {
                    "name": "node-test"
                },
                "name": "node-test"
            },
            "spec": {
                "containers": [
                    {
                        "image": "node:4.4.7",
                        "imagePullPolicy": "IfNotPresent",
                        "name": "node-test",
                        "command": [
                            "node"
                        ],
                        "args": [
                            "/usr/src/app/server.js"
                        ],
                        "ports": [
                            {
                                "containerPort": 8080,
                                "protocol": "TCP"
                            }
                        ],
                        "volumeMounts": [
                            {
                                "mountPath": "/usr/src/app",
                                "name": "myclaim2"
                            }
                        ],
                        "securityContext": {
                            "capabilities": {},
                            "privileged": false
                        },
                        "terminationMessagePath": "/dev/termination-log"
                    }
                ],
                "volumes": [
                    {
                        "name": "myclaim2",
                        "persistentVolumeClaim": {
                            "claimName": "myclaim2"
                        }
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "serviceAccount": ""
            },
            "status": {}
        },
        {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
                "creationTimestamp": null,
                "name": "node-service"
            },
            "spec": {
                "portalIP": "",
                "ports": [
                    {
                        "name": "web",
                        "port": 8080,
                        "protocol": "TCP"
                    }
                ],
                "selector": {
                    "name": "node-test"
                },
                "sessionAffinity": "None",
                "type": "ClusterIP"
            },
            "status": {
                "loadBalancer": {}
            }
        },
        {
            "apiVersion": "v1",
            "kind": "Route",
            "metadata": {
                "annotations": {},
                "name": "node-route"
            },
            "spec": {
                "to": {
                    "name": "node-service"
                }
            }
        }
    ]
}

On Mon, Sep 19, 2016 at 2:19 PM, Ravi Kapoor <ravikapoor...@gmail.com>
wrote:

>
> I created the following definition. It successfully creates a service, a
> pod, and a route, and I am able to access the website.
>
> It shows 1 pod running; however, there are no scale up/down buttons in the
> UI.
> How can I scale this application up?
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: no items in "browse catalog" page

2016-08-17 Thread Ravi Kapoor
Thank you so much Cesar. It looks like that is exactly what I was facing.
As commented in the issue, I tried oc cluster up --version=v1.3.0-alpha.2
and that also works.


P.S.: if any developers are here, I would suggest switching the OpenShift
Docker distribution to start with "oc cluster up" instead of "openshift
start". I was unable to get the Docker image to show the default images
(which are set up by oc cluster up, as Jessica mentioned) and was also unable
to find a way to run "oc cluster up" after the container started; I got the
error "openshift is already up". I tried "oc cluster down", which killed the
container. I also tried building my own Dockerfile with a different entry
point, but for some reason that did not work either.



On Wed, Aug 17, 2016 at 1:03 PM, Cesar Wong <cew...@redhat.com> wrote:

> Hi Ravi,
>
> What distro are you using?
>
> You may be hitting this issue: https://github.com/openshift/origin/issues/10215
>
> A simple workaround is to use an earlier version of the OpenShift images:
> Stop your current cluster with 'oc cluster down'. Then bring it back up
> with 'oc cluster up --version=v1.2.0'
>
>
> On Wed, Aug 17, 2016 at 2:54 PM, Ravi Kapoor <ravikapoor...@gmail.com>
> wrote:
>
> Unfortunately I am no familiar enough to follow instructions such as "giving
> the API server to use the public key to verify tokens"
> Also this was more than a year ago, how come out of box version is
> unstable enough that it doesn't work on its own without fixes?
> Are there detailed instructions on how to get it up and running?
>
>
> On Mon, Aug 15, 2016 at 10:49 PM, Jonathan Yu <jaw...@redhat.com> wrote:
>
>> Hey Ravi,
>>
>> On Mon, Aug 15, 2016 at 8:35 PM, Ravi Kapoor <ravikapoor...@gmail.com>
>> wrote:
>>
>>> Thanks Jessica, that was helpful. I had to do following
>>>
>>> 1. oc cluster up
>>> it gave error
>>> Error: did not detect an --insecure-registry argument on the Docker
>>> daemon
>>> Solution:
>>>
>>> Ensure that the Docker daemon is running with the following argument:
>>> --insecure-registry 172.30.0.0/16
>>>
>>
>> You will need to modify your /etc/sysconfig/docker file (on Fedora and I
>> think also CentOS and RHEL) to add this flag. Other OSes will have these
>> flags potentially stored elsewhere, like /etc/default on Debian and Ubuntu.
>>
>>>
>>> 2. Google search reveals too many places where this should be edited.
>>>
>>
>> Yes, unfortunately different distros do things differently as there's no
>> "standard" approach. I would suggest adding your distro name to your Google
>> searches to get the most relevant results.
>>
>>> For me, I had to add it to ExecStart in file /etc/systemd/system/docker.service
>>>
>>> Now I do see images as shown in walkthrough
>>> https://www.youtube.com/watch?v=yFPYGeKwmpk
>>>
>>> 3. However as I follow the tutorial, at time 6:30 (
>>> https://youtu.be/yFPYGeKwmpk?t=390), it fails to deploy. Log shows
>>> following error
>>> error: cannot connect to the server: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
>>>
>>
>> Hmm, this looks interesting. Unfortunately I'm not an expert so I'm not
>> sure where to look for this. A quick search for the error message yields
>> this bug: https://github.com/kubernetes/kubernetes/issues/10620 - does
>> anything in there help?
>>
>>>
>>> Once again, I find too many suggestions to how this can be fixed. I am
>>> afraid trying all those will take me days.
>>> One of the pages said this has been fixed in OS 1.3, however I am
>>> running OS 1.3
>>>
>>> thanks so much
>>>
>>>
>>>
>>> On Mon, Aug 15, 2016 at 4:33 PM, Jessica Forrester <jforr...@redhat.com>
>>> wrote:
>>>
>>>> So if you are just running a local dev cluster to try things out, I
>>>> recommend using 'oc cluster up' instead, it will set up a lot for you,
>>>> including all the example image streams and templates.
>>>>
>>>> If you want to add them in an existing cluster see step 10 here
>>>> https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc#develop-on-virtual-machine-using-vagrant
>>>>
>>>> Where the files it is referring to are in the origin repo under
>>>> examples.
>>>>
>>>> On Mon, Aug 15, 2016 at 7:09 PM, Ravi Kapoor <ravikapoor...@gmail.com>
>&

Re: few basic questions about S2I and docker run

2016-08-26 Thread Ravi Kapoor
Ben,

Thank you so much for taking the time to explain. This is very helpful.
If I may, I have a few followup questions:

> That is not a great approach to running code. It's fine for
development, but you really want to be producing immutable images that a
developer can hand to QE; once QE has tested it, they can hand that exact
same image to prod, and there's no risk that pieces have changed.

Q1: It seems like Lyft uses the approach I was mentioning, i.e. injecting
code into containers rather than copying code into the image (ref:
https://youtu.be/iC2T3gJsB0g?t=595). In this approach there are only two
elements: the image (which will not change) and the code build/tag (which
will also not change). So what else can change?
> running things in that way means you need to get both the image and your
class files into paths on any machine where the image is going to be run,
and then specify that mount path correctly

Q2: I would think that OpenShift has a mechanism to pull files from git into
a temp folder and a way to volume-mount that temp folder into any container
it runs. Volume mounts are a very basic feature of Docker, and I am hoping
they are somehow workable with OpenShift. Are they not? Don't we need them
for, let's say, database containers? Say a mongodb container is running and
writing data to a volume-mounted disk. If the container crashes, is OpenShift
able to start a new container with the previously saved data?


Q3: Even if you disagree, I would still like to know (if nothing else then
for learning/education) how to run external images with volume mounts and
other parameters passed into the container. I am having a very hard time
finding this; a rough sketch of what I mean follows.
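To make Q3 concrete, the docker run example I quoted below would, I assume,
look roughly like this as a container entry in a pod or deployment config
spec (field names as I understand them; the "code" volume would come from a
hostPath or persistent volume claim defined alongside it):

  "containers": [
    {
      "name": "myapp",
      "image": "openjdk:8-jre-alpine",
      "command": ["java", "myClass"],
      "workingDir": "/usr/src/myapp",
      "volumeMounts": [
        { "name": "code", "mountPath": "/usr/src/myapp" }
      ]
    }
  ]

Is that the right direction, or is there a different mechanism I should be
using?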

regards
Ravi


On Fri, Aug 26, 2016 at 10:29 AM, Ben Parees  wrote:

>
>
> On Fri, Aug 26, 2016 at 1:07 PM, Ravi  wrote:
>
>>
>> So I am trying to use openshift to manage our dockers.
>>
>> First problem I am facing is that most of documentation and image
>> templates seem to be about S2I. We are
>
>
> ​When it comes to building images, openshift supports basically 4
> approaches, in descending order of recommendation and increasing order of
> flexibility:
>
> 1) s2i (you supply source and pick a builder image, we build a new
> application image and push it somewhere)
> 2) docker-type builds (you supply the dockerfile and content, we run
> docker build for you and push the image somewhere)
> 3) custom​ (you supply an image, we'll run that image, it can do whatever
> it wants to "build" something and push it somewhere, whether that something
> is an image, jar file, etc)
> 4) build your images externally on your own infrastructure and just use
> openshift to run them.
>
> The first (3) of those are discussed here:
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#builds
> ​
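> As a concrete sketch of option 1, a single command along these lines
> generates the BuildConfig (and the other application objects) for you; the
> builder image and repository URL are only illustrative:
>
>   oc new-app nodejs~https://github.com/openshift/nodejs-ex.git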
>
>
>> considering continuous builds for multiple projects, and building an
>> image every hour for multiple projects would create a total of 20 GB of
>> images every day.
>>
>
> I'm not sure how this statement relates to s2i.  Do you have a specific
> concern about s2i with respect to creating these images?  Openshift does
> offer image pruning to help deal with the number of images you sound like
> you'll be creating, if you're interested in that.
>
>
>
>>
>> Q1: Is this right way of thinking? Since today most companies are doing
>> CI, this should be a common problem. Why is S2I considered impressive
>> feature?
>>
>
> ​S2I really has little to do with CI/CD.  S2I is one way to produce docker
> images, there are others as I listed above.  Your CI flow is going to be
> something like:
>
> 1) change source
> 2) build that source into an image (in whatever way you want, s2i is one
> mechanism)
> 3) test the new image
> 4) push the new image into production
>
> ​The advantages to using s2i are not about how it specifically works well
> with CI, but rather with the advantages it offers around building images in
> a quick, secure, convenient way, as described here:
>
> https://docs.openshift.org/latest/architecture/core_concepts/builds_and_image_streams.html#source-build
>
>
>
>
>>
>> So, I am trying to use off the shelf images and inject code/conf into
>> them. I know how to do this from docker command line (example: docker run
>> --rm -it -v /my/host/folder:/usr/src/myapp -w /usr/src/myapp
>> openjdk:8-jre-alpine java myClass )
>>
>
> That is not a great approach to running code.  It's fine for development,
> but you really want to be producing immutable images that a developer can
> hand to QE; once QE has tested it, they can hand that exact same image to
> prod, and there's no risk that pieces have changed.
>
> Also running things in that way means you need to get both the image and
> your class files into paths on any machine where the image is going to be
> run, and then specify that mount path correctly.  It's not a scalable
> model.  You want to build runnable images, not images that need the
> application side-loaded via a 

how can I force rolling deployment without any change in deployment config

2016-09-29 Thread Ravi Kapoor
I created a deployment config and it is working fine.
However, I made a few changes to a file mounted using a persistent volume.
Somehow the changed file is not being loaded and the old file is still being used.

I found that to load the modified file, I need to do a new deployment.
I also found that unless I change something in my deploymentconfig.yaml,
the rolling deployment is not triggered.
So my temporary solution is to change the file name on disk, reflect the
same in the deployment config, and push it with "oc replace -f
deploymentConfig.yaml" (a sketch of this workaround is below).

While this solves the problem, renaming the file on every deployment is way
too risky.

I tried inserting a pseudo element in the YAML file such as "buildnumber: 1";
however, changing this parameter does not trigger the deployment either.

Is there a way for me to force a reload of the file from the persistent
volume, or to force a deployment without renaming the file?
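To make the workaround concrete, this is roughly what I do today (file and
object names are placeholders):

    # rename the file on the persistent volume so a fresh copy is picked up
    mv config-v1.properties config-v2.properties

    # update the deployment config to reference the new name, then push it,
    # which triggers a new rolling deployment
    oc replace -f deploymentConfig.yaml

If there is a single command that forces a new rollout without editing
anything -- I have seen "oc deploy <dc-name> --latest" mentioned, but have not
confirmed it against my version -- that would be ideal.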
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


error during install: subnet id does not exist

2016-11-17 Thread Ravi Kapoor
I am trying to install OpenShift using the instructions at
https://github.com/openshift/openshift-ansible
Question 1: Is this the best way to install? So far I have been using "oc
cluster up"; while it works, it crashes once in a while (at least the UI
crashes), so I am forced to restart it, which kills all pods.


Question 2:
After I did all the configuration, my install still fails with the following
error:

An exception occurred during task execution. To see the full traceback, use
-vvv. The error was:
InvalidSubnetID.NotFound: The subnet ID 'subnet-c7372dfd' does not exist
(request ID 2b4d4256-7204-4ced-9af3-318d86a759f0)


The subnet ID is correct; here is a screenshot.
[image: Inline image 1]

Thanks for any help.
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: error during install: subnet id does not exist

2016-11-17 Thread Ravi Kapoor
> Are you using openshift-ansible's AWS support to create EC2 instances for
> you? We create our instances by other means and then run openshift-ansible
> on them using the BYO playbooks.
I am not opposed to that; it's just that I am a beginner trying to get
something up and running. I can create the instances manually and run Ansible
on them, but I have not been able to find instructions for that.
OpenShift's "advanced install" instructions are way too advanced.

I have a single-node OpenShift working, but to add a node the instructions
only point to Ansible (oadm does not have a command for it). So I am thinking
the fastest way to a working cluster (with the ability to add nodes) is to
use Ansible, hence this path.

> Do you have the availability zone or VPC set in your inventory file? If
> so, does it match the subnet you specified?
The instructions do not ask for an availability zone or VPC. They only ask
for a subnet, and I have specified that.
Maybe it is picking up some other VPC where the subnet is not available.





On Thu, Nov 17, 2016 at 11:19 AM, Alex Wauck <alexwa...@exosite.com> wrote:

>
>
> On Thu, Nov 17, 2016 at 12:09 PM, Ravi Kapoor <ravikapoor...@gmail.com>
> wrote:
>
> Question1: Is this best way to install? So far I have been using "oc
>> cluster up" while it works it crashes once in a while (at least UI crashes,
>> so I am forced to restart it which kills all pods)
>>
>
> We used openshift-ansible to install our OpenShift cluster, and we fairly
> regularly use it to create temporary clusters for testing purposes.  I
> would consider it the best way to install.
>
>
>> Question2:
>> After I did all the configurations, my install still fails with following
>> error:
>>
>> An exception occurred during task execution. To see the full traceback, use
>> -vvv. The error was: InvalidSubnetID.NotFound: The subnet ID 'subnet-c7372dfd'
>> does not exist (request ID 2b4d4256-7204-4ced-9af3-318d86a759f0)
>>
>
> Are you using openshift-ansible's AWS support to create EC2 instances for
> you?  We create our instances by other means and then run openshift-ansible
> on them using the BYO playbooks, so I'm not familiar with
> openshift-ansible's AWS support.  Do you have the availability zone or VPC
> set in your inventory file?  If so, does it match the subnet you specified?
>
>
>
> --
>
> Alex Wauck // DevOps Engineer
>
> *E X O S I T E*
> *www.exosite.com <http://www.exosite.com/>*
>
> Making Machines More Human.
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users