Re: OpenShift long term plan regarding Routes and Ingresses

2018-03-02 Thread Clayton Coleman
We have no plan to deprecate routes.  Since ingresses are still beta, are
less expressive, and there is no clear replacement proposal, we plan to
continue to offer routes for a long time.  There's some work in 3.10 to
convert ingresses to routes automatically to simplify the transition,
which will support the core object and maybe a few annotations.
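
As a rough sketch (the service, project, and hostname below are placeholders),
the route-based equivalent of an ingress rule is just:

# expose an existing service as a route
oc expose service myapp --hostname=myapp.apps.example.com
# list both object types in a project
oc get routes,ingress -n myproject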

> On Mar 2, 2018, at 2:55 AM, Mickaël Canévet  wrote:
>
> both.

___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Build pod already exists

2018-03-02 Thread Lionel Orellana
For anyone who comes across this issue here: it was likely related to NTP.
There is a Bugzilla for it:
https://bugzilla.redhat.com/show_bug.cgi?id=1547551
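
If you want to rule out clock skew on your own nodes first, a quick check along
these lines (assuming chronyd or ntpd is managing time) is enough:

# is the system clock reported as synchronized?
timedatectl status
# with chronyd:
chronyc tracking
# with ntpd:
ntpq -p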

On 29 January 2018 at 06:12, Ben Parees  wrote:

>
>
> On Sun, Jan 28, 2018 at 6:05 AM, Lionel Orellana 
> wrote:
>
>> Thanks Ben.
>>
>> I can reproduce it with the nodejs and wildfly build configs directly off
>> the catalog.
>>
>
> and you haven't created those buildconfigs previously and then deleted
> them, within that project?  This is the first time you're creating the
> buildconfig in the project?
>
>
>>
>> My first thought was that the network issue was causing the build pod to
>> be terminated and a second one to be created before the first one dies.
>> Is that a possibility?
>>
>
> I don't think so; once we create a pod for a given build we consider it
> done and shouldn't be creating another one, but the logs should shed some
> light on that.
>
>
>> I will get the logs tomorrow.
>>
>> On Sun, 28 Jan 2018 at 1:51 pm, Ben Parees  wrote:
>>
>>> On Sat, Jan 27, 2018 at 4:06 PM, Lionel Orellana 
>>> wrote:
>>>
 Hi,

 I'm seeing a random error when running builds. Some builds fail very
 quickly with "build pod already exists". This is happening with a number of
 build configs and seems to occur when builds from different build configs
 are running at the same time.

>>>
>>> This can happen if you "reset" the build sequence number in your
>>> buildconfig (buildconfig.status.lastVersion).  Are you recreating
>>> buildconfigs that may have previously existed and run builds, or editing
>>> the buildconfig in a way that might be resetting the lastVersion field to
>>> an older value?
>>>
>>> Are you able to confirm whether or not a pod does exist for that build?
>>> The build pod name will be of the form "buildconfigname-buildnumber-build".
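>>>
>>> For example (the project and buildconfig names below are placeholders),
>>> something like this shows the current sequence value and whether a
>>> matching pod is already there:
>>>
>>> # current build sequence number on the buildconfig
>>> oc get bc nodejs -o jsonpath='{.status.lastVersion}' -n myproject
>>> # does a pod already exist for that build number?
>>> oc get pod nodejs-26-build -n myproject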
>>>
>>> If you're able to recreate this consistently (assuming you're sure it's
>>> not a case of having recreated an old buildconfig or reset the
>>> buildconfig lastVersion sequence value), enabling level 4 logging on your
>>> master, reproducing it, and then providing us with the logs would be
>>> helpful to trace what is happening.
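>>>
>>> As a sketch (assuming an RPM-based install; the sysconfig file and unit
>>> are origin-master or atomic-openshift-master depending on the install),
>>> that would look roughly like:
>>>
>>> # set OPTIONS=--loglevel=4 in the master sysconfig, e.g.
>>> vi /etc/sysconfig/atomic-openshift-master
>>> systemctl restart atomic-openshift-master
>>> # reproduce the failing build, then capture the journal
>>> journalctl -u atomic-openshift-master --since "10 minutes ago" > master.log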
>>>
>>>
>>>

 There is a possibly related error in one of the nodes:

 Jan 28 07:26:39  atomic-openshift-node[10121]: W0128 07:26:39.735522 10848
 docker_sandbox.go:266] NetworkPlugin cni failed on the status hook for pod
 "nodejs-26-build_bimorl": Unexpected command output nsenter: cannot open :
 No such file or directory
 Jan 28 07:26:39 atomic-openshift-node[10121]: with error: exit status 1

>>>
>>> This shouldn't be related to issues with pod creation (since this error
>>> wouldn't occur until after the pod is created and attempting to run on a
>>> node), but it's definitely something you'll want to sort out.  I've CCed
>>> our networking lead into this thread.
>>>
>>>
>>>

 -bash-4.2$ oc version
 oc v3.6.0+c4dd4cf
 kubernetes v1.6.1+5115d708d7

 Any ideas?

 Thanks


 Lionel.

 ___
 users mailing list
 users@lists.openshift.redhat.com
 http://lists.openshift.redhat.com/openshiftmm/listinfo/users


>>>
>>>
>>> --
>>> Ben Parees | OpenShift
>>>
>>>
>
>
> --
> Ben Parees | OpenShift
>
>
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Error response from daemon: No such container: origin-node when Deploying OpenShift Origin v3.7.0 on CentOS 7

2018-03-02 Thread Anda Nicolae
Hi all,

I have a CentOS 7 server and have created 2 virtual machines (VMs) on this
server; CentOS 7 is also the OS that runs on the VMs.
I am trying to deploy OpenShift Origin v3.7.0 on these 2 VMs (the 1st VM is
the master and the 2nd VM is the node).

I have run the following commands:
ansible-playbook -i /etc/ansible/hosts 
/root/openshift-ansible/playbooks/prerequisites.yml
which successfully finishes, and:

ansible-playbook -vvv -i /etc/ansible/hosts 
/root/openshift-ansible/playbooks/deploy_cluster.yml
which fails with the following error:

fatal: []: FAILED! => {
    "attempts": 3,
    "changed": false,
    "invocation": {
        "module_args": {
            "daemon_reload": false,
            "enabled": null,
            "masked": null,
            "name": "origin-node",
            "no_block": false,
            "state": "restarted",
            "user": false
        }
    }
}

MSG:

Unable to restart service origin-node: Job for origin-node.service failed
because the control process exited with error code. See "systemctl status
origin-node.service" and "journalctl -xe" for details.


META: ran handlers
to retry, use: --limit 
@/root/openshift-ansible/playbooks/deploy_cluster.retry

PLAY RECAP ***
            : ok=407  changed=75   unreachable=0   failed=1
            : ok=53   changed=6    unreachable=0   failed=0
localhost   : ok=13   changed=0    unreachable=0   failed=0

From journalctl -xe, I have:
umounthook : c0be9a269c90: Failed to read directory /usr/share/oci-umount/oci-umount.d: No such file or directory
master-centos.mydomain.com origin-node[49477]: Error response from daemon: No such container: origin-node

docker ps shows:
[root@master-centos ~]# docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED          STATUS          PORTS   NAMES
6e36d7f538da   openshift/origin:v3.7.1                  "/usr/bin/openshift s"   29 minutes ago   Up 29 minutes           origin-master-controllers
2c4136affcd5   openshift/origin:v3.7.1                  "/usr/bin/openshift s"   29 minutes ago   Up 29 minutes           origin-master-api
308a746e77f5   registry.fedoraproject.org/latest/etcd   "/usr/bin/etcd"          29 minutes ago   Up 29 minutes           etcd_container
ef9f2c58c8f7   openshift/openvswitch:v3.7.1             "/usr/local/bin/ovs-r"   29 minutes ago   Up 29 minutes


Below is my hosts file:
cat /etc/ansible/hosts
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
openshift_master_default_subdomain=apps.mydomain.com
ansible_ssh_user=root
ansible_become=yes
containerized=true
openshift_release=v3.7
openshift_image_tag=v3.7.1
openshift_pkg_version=-3.7.0
openshift_master_cluster_method=native
openshift_master_cluster_hostname=internal-master-centos.mydomain.com
openshift_master_cluster_public_hostname=master-centos.mydomain.com
deployment_type=origin

os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_overwrite_named_certificates=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/htpasswd'}]
openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'
openshift_master_api_port=443
openshift_master_console_port=443
openshift_disable_check=memory_availability,disk_availability,docker_image_availability

[masters]
 openshift_hostname=master-centos.mydomain.com

[etcd]
 openshift_hostname=master-centos.mydomain.com

[nodes]
 openshift_hostname=master-centos.mydomain.com openshift_schedulable=false
 openshift_hostname=slave-centos.mydomain.com openshift_node_labels="{'router':'true','registry':'true'}"
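
As a sanity check of the inventory itself (group names as above), an Ansible
ping against both groups should confirm the VMs are reachable:

ansible -i /etc/ansible/hosts masters -m ping
ansible -i /etc/ansible/hosts nodes -m ping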

Do you have any idea why my deployment fails?

Thanks,
Anda
___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users