Re: 'oc cluster up' command

2016-05-18 Thread Cesar Wong

Hi Aleksandar,
You can specify your default router suffix as an argument
(--routing-suffix). Or you can bring up your cluster, modify the
configuration however you like, shut it down, and bring it up again with
the --use-existing-config flag. Details are in the readme doc
[https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md].
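For example, a minimal sketch of both paths (the domain is illustrative,
and the config edit assumes the default master-config.yaml that 'oc
cluster up' generates):

$ oc cluster up --routing-suffix=apps.mycompany.example

$ oc cluster up
# ...edit routingConfig.subdomain in the generated master-config.yaml...
$ oc cluster down
$ oc cluster up --use-existing-config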


On Wed, May 18, 2016 at 8:12 AM, Aleksandar Kostadinov 
<akost...@redhat.com> wrote:

This is an amazingly nice feature. Two questions:

* How can one change the master configuration (e.g. the default router subdomain)?
* How does one clean up after 'oc cluster down' (i.e. delete generated
certificates, docker images, etc.)?

Cesar Wong wrote on 05/18/2016 03:48 AM:
> PR #7675 <https://github.com/openshift/origin/pull/7675> introduces a
> new command 'oc cluster up' that allows you to start an OpenShift
> all-in-one cluster with a configured registry, router and an initial set
> of templates and image streams. The 'oc cluster down' command will stop
> the cluster.
>
> The command can be run from any client platform we support (Windows, OS
> X, Linux). All it requires is a valid Docker connection.
>
> At its most basic: make sure Docker commands like 'docker ps' work,
> download the 'oc' binary
> <https://github.com/openshift/origin/releases/latest> for your platform,
> and run:
>
> $ oc cluster up
>
> To stop, run:
>
> $ oc cluster down
>
> You can read more about other options and usage in specific
> situations here:
> <https://github.com/csrwng/origin/blob/1a19ebdba32376766f80d079a80782ca59a1fd55/docs/cluster_up_down.md>.

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


Re: Question on DNS when running Master/Node on same node

2017-03-23 Thread Cesar Wong
Your container network needs to have access to the master API and DNS ports. 
Instructions to allow that are in step #3 here:
https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md#linux 
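Roughly, that step amounts to the following on a firewalld host (a sketch
based on the doc; the zone name is arbitrary and the subnet must match
your docker bridge network):

# find the docker bridge subnet (commonly 172.17.0.0/16)
$ docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge

# let containers reach the master API (8443) and DNS (53, 8053)
$ firewall-cmd --permanent --new-zone=dockerc
$ firewall-cmd --permanent --zone=dockerc --add-source=172.17.0.0/16
$ firewall-cmd --permanent --zone=dockerc --add-port=8443/tcp
$ firewall-cmd --permanent --zone=dockerc --add-port=53/udp
$ firewall-cmd --permanent --zone=dockerc --add-port=8053/udp
$ firewall-cmd --reload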



> On Mar 23, 2017, at 2:31 PM, Rishi Misra wrote:
> 
> I spoke to my network admin, and once he stopped and disabled iptables I
> could get it to work. I am not sure if this is a hack or if there is a way
> around it. With iptables disabled, here is the response:
> 
> //
> # curl -k https://172.30.0.1 
> {
>   "paths": [
> "/api",
> "/api/v1",
> "/apis",
> "/controllers",
> "/healthz",
> "/healthz/ping",
> "/healthz/ready",
> "/metrics",
> "/oapi",
> "/oapi/v1",
> "/swaggerapi/"
>   ]
> }
> 
> //
> 
> On Thu, Mar 23, 2017 at 2:22 PM, Clayton Coleman wrote:
> One more test (sorry) - inside a debug container, can you
> 
> $ curl -k https://172.30.0.1 
> 
> It should return a list of JSON paths. If it can't, it means one of two
> things: MASTER_IP is not being correctly registered into your endpoints
> ("oc get endpoints -n default" should include the master IP), or a
> firewall or other network restriction is blocking access from containers
> on regular tcp/udp, but not icmp.
> 
> If it can, then another service on your host is likely blocking dns on the 
> container bridge network (which I've never seen be anything other than some 
> form of firewall).
> 
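A quick way to check the first case (the output shape is illustrative; the
kubernetes endpoint should simply list your MASTER_IP):

$ oc get endpoints -n default
NAME         ENDPOINTS        AGE
kubernetes   MASTER_IP:8443   1h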
> On Mar 23, 2017, at 1:09 PM, Rishi Misra wrote:
> 
>> Here are the results:
>> 
>> //
>> # oc get pods
>> NAME   READY STATUSRESTARTS   AGE
>> vote-1-7acnx   1/1   Running   0  38s
>> # oc debug pod/vote-1-7acnx
>> Debugging with pod/vote-1-7acnx-debug, original command: gunicorn app:app -b
>> 0.0.0.0:80 --log-file - --access-logfile - --workers 4 --keep-alive 0
>> Waiting for pod to start ...
>> Pod IP: 172.17.0.2
>> If you don't see a command prompt, try pressing enter.
>> 
>> root@vote-1-7acnx-debug:/app# dig @MASTER_IP -p 53 
>> kubernetes.default.svc.cluster.local
>> 
>> ; <<>> DiG 9.9.5-9+deb8u10-Debian <<>> @MASTER_IP -p 53 
>> kubernetes.default.svc.cluster.local
>> ; (1 server found)
>> ;; global options: +cmd
>> ;; connection timed out; no servers could be reached
>> root@vote-1-7acnx-debug:/app# dig @MASTER_IP -p 53 www.google.com
>> 
>> ; <<>> DiG 9.9.5-9+deb8u10-Debian <<>> @MASTER_IP -p 53 www.google.com
>> ; (1 server found)
>> ;; global options: +cmd
>> ;; connection timed out; no servers could be reached
>> root@vote-1-7acnx-debug:/app#
>> 
>> //
>> 
>> On Thu, Mar 23, 2017 at 12:49 PM, Clayton Coleman wrote:
>> OK, can you create a running container (oc debug pod/NAME_OF_POD) and inside
>> of it run the same dig commands? (You'll need a docker image with dig already
>> installed.)
>> 
>> On Thu, Mar 23, 2017 at 12:46 PM, Rishi Misra wrote:
>> It seems to:
>> 
>> /=/
>> # dig @MASTER_IP -p 53 kubernetes.default.svc.cluster.local
>> 
>> ; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.2 <<>> @MASTER_IP -p 53 
>> kubernetes.default.svc.cluster.local
>> ; (1 server found)
>> ;; global options: +cmd
>> ;; Got answer:
>> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34034
>> ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
>> 
>> ;; QUESTION SECTION:
>> ;kubernetes.default.svc.cluster.local. IN A
>> 
>> ;; ANSWER SECTION:
>> kubernetes.default.svc.cluster.local. 30 IN A   172.30.0.1
>> 
>> ;; Query time: 0 msec
>> ;; SERVER: MASTER_IP#53(MASTER_IP)
>> ;; WHEN: Thu Mar 23 12:41:04 EDT 2017
>> ;; MSG SIZE  rcvd: 70
>> 
>> # dig @MASTER_IP -p 53 www.google.com
>> 
>> ; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.2 <<>> @MASTER_IP -p 53 www.google.com
>> ; (1 server found)
>> ;; global options: +cmd
>> ;; Got answer:
>> ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28549
>> ;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 1
>> 
>> ;; OPT PSEUDOSECTION:
>> ; EDNS: version: 0, flags:; udp: 4096
>> ;; QUESTION SECTION:
>> ;www.google.com.              IN      A
>> 
>> ;; ANSWER SECTION:
>> www.google.com.       6       IN      A       74.125.21.147
>> www.google.com.       6       IN      A       74.125.21.99
>> www.google.com.       6

Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Cesar Wong
Hi Devan,

You can see my branch here:
https://github.com/csrwng/origin/tree/parameterize_template
(last 5 commits)

Hopefully there will be a PR soon. The REST endpoint should be functional; the
CLI still needs work. Basically, the idea is the reverse of the ‘oc process’
command: the input is a list of resources, and out comes a template with
parameters.
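Conceptually it is the inverse of what exists today; a sketch (the second
command is hypothetical and only illustrates the direction of the
transformation):

# forward, exists today: Template + parameter values -> List of resources
$ oc process -f app-template.yaml --param=NAME=ruby-hello-world > resources.json

# reverse, what this work adds: List of resources -> Template with parameters
$ oc get dc,svc,is -o json | <reverse-of-process> > parameterized-template.json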

> On Aug 9, 2017, at 11:40 AM, Devan Goodwin <dgood...@redhat.com> wrote:
> 
> On Wed, Aug 9, 2017 at 11:44 AM, Cesar Wong <cew...@redhat.com> wrote:
>> Hi Devan,
>> 
>> This past iteration I started work on this same problem [1]
>> 
>> https://trello.com/c/I2ZJxS94/998-5-improve-oc-export-to-parameterize-containerapppromotion
>> 
>> The problem is broad and the way I decided to break it up is to consider the
>> export and parameterize operations independently. The export should be
>> handled by the resource’s strategy as you mentioned in the Kube issue you
>> opened. The parameterization part can be a follow-up to the export. Here’s
>> an initial document describing it:
>> 
>> https://docs.google.com/a/redhat.com/document/d/15SLkhXRovY1dLbxpWFy_Wfq3I6xMznsOAnopTYrXw_A/edit?usp=sharing
> 
> Thanks that was a good read, will keep an eye on this document.
> 
> Does anything exist yet for your parameterization code? Curious what
> it looks like and if it's something we could re-use yet, what the
> inputs and outputs are, etc.
> 
>> 
>> On the export side, I think we need to decide whether there are different
>> “types” of export that can happen, which should affect the logic of the
>> resource strategy. For example, does a deployment config look different if
>> you’re exporting it for use in a different namespace vs. a different cluster?
>> If this is the case, then right now is probably a good time to drive that
>> change to the upstream API as David suggested.
> 
> Is anyone working on a proposal for this export logic upstream? I am
> wondering if I should try to put one together if I can find the time.
> The general idea (as I understand it) would be to migrate the
> currently quite broken export=true param to something strategy-based,
> and interpret "true" to mean a strategy that matches what we do today.
> The references in code I've seen indicate that the current intention
> is to strip anything the user cannot specify themselves.
> 
> 
> 
>> 
>> On Aug 9, 2017, at 10:27 AM, Ben Parees <bpar...@redhat.com> wrote:
>> 
>> 
>> 
>> On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin <dgood...@redhat.com> wrote:
>>> 
>>> On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees <bpar...@redhat.com> wrote:
>>>> 
>>>> 
>>>> On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin <dgood...@redhat.com>
>>>> wrote:
>>>>> 
>>>>> We are working on a more robust project export/import process (into a
>>>>> new namespace, possibly a new cluster, etc) and have a question on how
>>>>> to handle image streams.
>>>>> 
>>>>> Our first test was with "oc new-app
>>>>> https://github.com/openshift/ruby-hello-world.git", this results in an
>>>>> image stream like the following:
>>>>> 
>>>>> $ oc get is ruby-hello-world -o yaml
>>>>> apiVersion: v1
>>>>> kind: ImageStream
>>>>> metadata:
>>>>>   annotations:
>>>>>     openshift.io/generated-by: OpenShiftNewApp
>>>>>   creationTimestamp: 2017-08-08T12:01:22Z
>>>>>   generation: 1
>>>>>   labels:
>>>>>     app: ruby-hello-world
>>>>>   name: ruby-hello-world
>>>>>   namespace: project1
>>>>>   resourceVersion: "183991"
>>>>>   selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
>>>>>   uid: 4bd229be-7c31-11e7-badf-989096de63cb
>>>>> spec:
>>>>>   lookupPolicy:
>>>>>     local: false
>>>>> status:
>>>>>   dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
>>>>>   tags:
>>>>>   - items:
>>>>>     - created: 2017-08-08T12:02:04Z
>>>>>       dockerImageReference: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
>>>>>       generation: 1
>>>>>       image: sha

Re: Seeking Advice On Exporting Image Streams

2017-08-09 Thread Cesar Wong
Hi Devan,

This past iteration I started work on this same problem [1]

https://trello.com/c/I2ZJxS94/998-5-improve-oc-export-to-parameterize-containerapppromotion


The problem is broad and the way I decided to break it up is to consider the 
export and parameterize operations independently. The export should be handled 
by the resource’s strategy as you mentioned in the Kube issue you opened. The 
parameterization part can be a follow-up to the export. Here’s an initial
document describing it:

https://docs.google.com/a/redhat.com/document/d/15SLkhXRovY1dLbxpWFy_Wfq3I6xMznsOAnopTYrXw_A/edit?usp=sharing


On the export side, I think we need to decide whether there are different
“types” of export that can happen, which should affect the logic of the
resource strategy. For example, does a deployment config look different if
you’re exporting it for use in a different namespace vs. a different cluster?
If this is the case, then right now is probably a good time to drive that
change to the upstream API as David suggested.
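To make that concrete (values illustrative, parameter names hypothetical): a
reference like

  172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a1...

still works from another namespace in the same cluster once the namespace
segment is rewritten, but the internal registry IP means nothing in a
different cluster, so a cross-cluster export would need to re-resolve or
parameterize it, e.g.

  ${REGISTRY_HOST}/${NAMESPACE}/ruby-hello-world@sha256:8d0f81a1...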

> On Aug 9, 2017, at 10:27 AM, Ben Parees wrote:
> 
> 
> 
> On Wed, Aug 9, 2017 at 10:00 AM, Devan Goodwin wrote:
> On Wed, Aug 9, 2017 at 9:58 AM, Ben Parees wrote:
> >
> >
> > On Wed, Aug 9, 2017 at 8:49 AM, Devan Goodwin wrote:
> >>
> >> We are working on a more robust project export/import process (into a
> >> new namespace, possibly a new cluster, etc) and have a question on how
> >> to handle image streams.
> >>
> >> Our first test was with "oc new-app
> >> https://github.com/openshift/ruby-hello-world.git", this results in an
> >> image stream like the following:
> >>
> >> $ oc get is ruby-hello-world -o yaml
> >> apiVersion: v1
> >> kind: ImageStream
> >> metadata:
> >>   annotations:
> >>     openshift.io/generated-by: OpenShiftNewApp
> >>   creationTimestamp: 2017-08-08T12:01:22Z
> >>   generation: 1
> >>   labels:
> >>     app: ruby-hello-world
> >>   name: ruby-hello-world
> >>   namespace: project1
> >>   resourceVersion: "183991"
> >>   selfLink: /oapi/v1/namespaces/project1/imagestreams/ruby-hello-world
> >>   uid: 4bd229be-7c31-11e7-badf-989096de63cb
> >> spec:
> >>   lookupPolicy:
> >>     local: false
> >> status:
> >>   dockerImageRepository: 172.30.1.1:5000/project1/ruby-hello-world
> >>   tags:
> >>   - items:
> >>     - created: 2017-08-08T12:02:04Z
> >>       dockerImageReference: 172.30.1.1:5000/project1/ruby-hello-world@sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> >>       generation: 1
> >>       image: sha256:8d0f81a13ec1b8f8fa4372d26075f0dd87578fba2ec120776133db71ce2c2074
> >>     tag: latest
> >>
> >>
> >> If we link up with the kubernetes resource exporting by adding --export:
> >>
> >> $ oc get is ruby-hello-world -o yaml --export
> >> apiVersion: v1
> >> kind: ImageStream
> >> metadata:
> >>   annotations:
> >>     openshift.io/generated-by: OpenShiftNewApp
> >>   creationTimestamp: null
> >>   generation: 1
> >>   labels:
> >>     app: ruby-hello-world
> >>   name: ruby-hello-world
> >>   namespace: default
> >>   selfLink: /oapi/v1/namespaces/default/imagestreams/ruby-hello-world
> >> spec:
> >>   lookupPolicy:
> >>     local: false
> >> status:
> >>   dockerImageRepository: 172.30.1.1:5000/default/ruby-hello-world
> >>
> >>
> >> This leads to an initial question: what stripped the status tags? I
> >> would have expected this code to live in the image stream strategy:
> >>
> >> https://github.com/openshift/origin/blob/master/pkg/image/registry/imagestream/strategy.go
> >>
> >> but this does not satisfy RESTExportStrategy, so I wasn't able to
> >> determine where this is happening.
> >>
> >> The dockerImageRepository in status remains, but weirdly flips from
> >> "project1" to "default" when doing an export. Should this remain in an
> >> exported IS at all? And if so, is there any reason why it would flip
> >> from project1 to default?
> >>
> >> Our real problem however picks up in the deployment config after
> >> import, in here we end up with the following (partial) DC:
> >>
> >> apiVersion: v1
> >> kind: DeploymentConfig
> >> metadata:
> >>