Hey David,

If you're running Origin 1.1+, you can omit dnsIP from the node config and Origin will default to using the Kubernetes service IP. I'm not yet sure why that template is failing to evaluate (I suspect it has something to do with the version conditional), but, re: Jason's suggestion, we will be replacing the dns ip calculation with a filter that is easier to follow in https://github.com/openshift/openshift-ansible/pull/1588. That PR should merge very soon.
On Fri, Apr 15, 2016 at 10:04 AM, David Balakirev <[email protected]> wrote:

> Thanks for the confirmation, Jason!
>
> A small update since then: I realized I don't need to re-run the
> installer every time I want to check the host variables.
> As described here, I can run the openshift_facts.yml playbook:
>
> https://github.com/openshift/openshift-ansible/blob/master/README_origin.md#overriding-detected-ip-addresses-and-hostnames
>
> And the problem is present in the nicely formatted output:
>
> "dns_ip": "{# openshift_dns_ip | default(openshift_master_cluster_vip |
> default(None if openshift.common.version_gte_3_1_or_1_1 | bool else
> openshift_node_first_master_ip | default(None, true), true), true) #}",
>
> The expression is present in many places, so I don't yet see which one
> gets used in my case.
> Probably I should check the verbose logs.
>
> On 04/15/2016 02:42 PM, Jason DeTiberus wrote:
> > On Apr 15, 2016 5:43 AM, "David Balakirev" <[email protected]> wrote:
> > > Hi all,
> > >
> > > After getting a bit of experience with the binary distribution, I am
> > > now setting up Origin via the Ansible playbook.
> > >
> > > At the end of the installation I ran into some problems:
> > >
> > > TASK: [openshift_node | Start and enable node] ********************************
> > > <node-001.adnopenshift-dev.mycompany.com> REMOTE_MODULE service name=origin-node enabled=yes state=started
> > > <master-001.adnopenshift-dev.mycompany.com> REMOTE_MODULE service name=origin-node enabled=yes state=started
> > > failed: [node-001.adnopenshift-dev.mycompany.com] => {"failed": true}
> > > msg: Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
> > >
> > > failed: [master-001.adnopenshift-dev.mycompany.com] => {"failed": true}
> > > msg: Job for origin-node.service failed because the control process exited with error code.
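For what it's worth, the precedence that the nested default() chain above encodes can be sketched in plain Python. This is only an illustration of the fallback order, not the playbook's actual code; the parameter names mirror the template variables:

```python
def dns_ip(openshift_dns_ip=None, openshift_master_cluster_vip=None,
           version_gte_3_1_or_1_1=False, first_master_ip=None):
    """Mirror the Jinja2 default() chain: an explicit openshift_dns_ip
    wins, then the cluster VIP, then (only on pre-3.1/pre-1.1 versions)
    the first master's IP.  On 1.1+ the result may be None, which per
    Scott's note means "let Origin default to the Kubernetes service IP".
    The second argument to Jinja2's default(..., true) treats empty
    strings like undefined, hence the plain truthiness checks here."""
    if openshift_dns_ip:
        return openshift_dns_ip
    if openshift_master_cluster_vip:
        return openshift_master_cluster_vip
    if not version_gte_3_1_or_1_1:
        return first_master_ip or None
    return None
```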
> > > See "systemctl status origin-node.service" and "journalctl -xe" for details.
> > >
> > > So I checked the journal log:
> > >
> > > could not load config file "/etc/origin/node/node-config.yaml" due to an error: yaml: line 5: did not find expected ',' or '}'
> > >
> > > I believe what happens is that in node-config.yaml the following
> > > expression on line 4 cannot be evaluated somehow (the expression is
> > > not terminated properly), hence the problem on line 5.
> >
> > Yes, somehow the Ansible jinja2 template didn't get evaluated and was
> > output into the node config.
> >
> > We probably should be using a filter here to prevent failed
> > evaluations from getting written through.
> >
> > Scott or Andrew (cc'd), could you take a closer look?
> >
> > > allowDisabledDocker: false
> > > apiVersion: v1
> > > dnsDomain: cluster.local
> > > dnsIP: {# openshift_dns_ip | default(openshift_master_cluster_vip |
> > > default(None if openshift.common.version_gte_3_1_or_1_1 | bool else
> > > openshift_node_first_master_ip | default(None, true), true), true) #}
> > > dockerConfig:
> > >   execHandlerName: ""
> > > iptablesSyncPeriod: "5s"
> > > imageConfig:
> > >   format: openshift/origin-${component}:${version}
> > >   latest: false
> > > ...
> > >
> > > So I think it's the dnsIP that is problematic.
> > >
> > > I am not using HA or pacemaker, hence I did not configure any special
> > > settings (like openshift_master_cluster_vip).
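Jason's idea of guarding against failed evaluations could look something like the sketch below: before writing a rendered config to disk, refuse any text that still contains Jinja2 delimiters. This is illustrative only, not the actual openshift-ansible code (the PR Scott mentions takes its own approach):

```python
import re

# Jinja2 delimiters that should never survive rendering:
# {{ }} expressions, {% %} statements, {# #} comments.
_JINJA_LEFTOVER = re.compile(r"{[{%#]|[}%#]}")


def looks_rendered(text):
    """Return True if `text` appears fully rendered, i.e. contains no
    leftover Jinja2 delimiters that would corrupt the YAML on disk
    (exactly the failure mode seen with dnsIP above)."""
    return _JINJA_LEFTOVER.search(text) is None
```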
> > > I have a simple host inventory file I use when I run the installation:
> > >
> > > # Create an OSEv3 group that contains the masters and nodes groups
> > > [OSEv3:children]
> > > masters
> > > nodes
> > >
> > > # Set variables common for all OSEv3 hosts
> > > [OSEv3:vars]
> > > # SSH user, this user should allow ssh based auth without requiring a password
> > > ansible_ssh_user=root
> > >
> > > deployment_type=origin
> > >
> > > openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
> > >
> > > # host group for masters
> > > [masters]
> > > master-001.adnopenshift-dev.mycompany.com
> > >
> > > # host group for nodes
> > > [nodes]
> > > master-001.adnopenshift-dev.mycompany.com openshift_schedulable=false
> > > node-001.adnopenshift-dev.mycompany.com
> > >
> > > My assumption is that if I modified the dnsIP in the node-config.yaml
> > > file, the nodes would be able to start.
> > > But I would like to learn the proper configuration for the future (and
> > > avoid any manual steps if possible).
> > >
> > > DNS settings are configured as per the installation guide (prereq), I
> > > believe.
> > > hostname -f also works on both (master and node).
> > >
> > > Thanks very much in advance!
> > >
> > > Kind regards,
> > > Dave
> > >
> > > --
> > > David Balakirev, Senior Software Engineer
> > >
> > > AdNovum Hungary Kft.
> > > Kapás utca 11-15, H-1027 Budapest
> > > [email protected]
> > > www.adnovum.hu
> > >
> > > Locations: Zurich (HQ), Bern, Budapest, Ho Chi Minh City, Singapore
> > >
> > > _______________________________________________
> > > dev mailing list
> > > [email protected]
> > > http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
>
> --
> David Balakirev, Senior Software Engineer
>
> AdNovum Hungary Kft.
> Kapás utca 11-15, H-1027 Budapest
> [email protected]
>
> Locations: Zurich (HQ), Bern, Budapest, Ho Chi Minh City, Singapore
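Until the template issue is sorted out, one workaround would be to pin the value explicitly in [OSEv3:vars]. This is only a sketch, assuming openshift_dns_ip (the first variable in the failing expression) is honored by the installed version; the address below is a placeholder, not a recommendation:

```ini
# Placeholder address -- use a DNS resolver reachable from every node.
openshift_dns_ip=192.0.2.10
```

On Origin 1.1+, per Scott's note at the top of the thread, leaving this unset and letting Origin fall back to the Kubernetes service IP should also work once the template renders correctly.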
_______________________________________________
dev mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev
