Hello :)
So, the problem seems to be in the node :(

I tested in a pod with dns-utils on board and I was not able to resolve the
svc name.
I've performed the same tests directly on the node where the affected pod is
running, and just as I thought, I'm not able to resolve the svc names there
either - connection timed out.
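
For reference, this is roughly what I ran - the namespace placeholder and
the node IP are mine to fill in, the rest follows the commands suggested
earlier in this thread:

    # from a throwaway pod with dns-utils on board
    oc run dnstest -it --restart=Never --rm --image=tutum/dnsutils -- bash
    dig kibanasg +short +search                            # times out
    dig kibanasg.<namespace>.svc.cluster.local +short      # times out as well

    # the same lookup directly on the affected node
    dig kubernetes.default.svc.cluster.local @<node_ip> +short   # times out too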

Now the question is: what is blocking these requests? The nodes are reported
as "Ready". I can access all services via their IPs. Only the DNS requests
fail with "connection timed out" or "Could not resolve host:
kubernetes.default.svc; Temporary failure in name resolution".
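
My next step is to check whether dnsmasq on the affected node is listening
and answering at all - roughly something like this (the node and master IPs
are placeholders):

    systemctl status dnsmasq
    ss -lnup | grep ':53 '
    dig kubernetes.default.svc.cluster.local @<node_ip> +short     # via the node's dnsmasq
    dig kubernetes.default.svc.cluster.local @<master_ip> +short   # straight at the master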

On a perfectly working node, I can easily resolve svc names, even directly
from the node.
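
For comparison (again, the IPs are placeholders): the same query against a
healthy node's dnsmasq returns the cluster IP right away, while on the
affected node it just hangs:

    dig kubernetes.default.svc.cluster.local @<healthy_node_ip> +short    # returns the cluster IP
    dig kubernetes.default.svc.cluster.local @<affected_node_ip> +short   # connection timed out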





2017-10-23 9:19 GMT+02:00 Marko Lukša <[email protected]>:

> Everything looks fine (except for the double nameserver entry, but that
> shouldn't cause any problems).
>
> Try running the following:
>
>     oc run dnstest -it --restart Never --rm --image tutum/dnsutils bash
>
> And then try looking up your service through the container's default
> nameserver:
>
>     dig kibanasg +short +search
>
> and through the DNS running on the master:
>
>     dig kibanasg +short +search @kubernetes.default
>
> or, if that doesn't work, use the master's IP:
>
>     dig kibanasg +short +search @ip_of_openshift_master
>
> You can also try looking up the kubernetes service:
>
>     dig kubernetes.default +short +search @ip_of_openshift_master
>
> This should give you an idea of where the problem is. If you get the
> service IP when querying the master directly, then the problem is obviously
> in the node's dnsmasq.
>
> M.
>
>
> On 23. 10. 2017 08:16, Łukasz Strzelec wrote:
>
> Ok, let's see:
>
> I've checked the /etc/resolv.conf in the pod:
>
> [image: inline image 2]
>
> The 10.111.208.195 is the IP of the node on which dnsmasq was started.
> What is strange is that the pod is getting duplicated entries.
>
> On the node, /etc/resolv.conf has these entries:
> [image: inline image 3]
>
> The origin node config file has exactly the same IP address, which is
> the IP of the node.
>
> Best regards
>
>> 2017-10-20 14:27 GMT+02:00 Marko Lukša <[email protected]>:
>>
>>> OK, then you're most likely not using the correct DNS server (or you're
>>> using the correct one, but it's not properly updating its entries; but I
>>> doubt that's the case).
>>>
>>> Check the contents of /etc/resolv.conf. Usually, the "nameserver" line
>>> should point to the IP of the node the pod is running on. The node runs
>>> dnsmasq, which is what your pods should use as the DNS server. This is
>>> configured in the node's /etc/origin/node/node-config.yaml file (look
>>> for the dnsIP entry). If you configured it to bypass the local dnsmasq, you
>>> won't be able to resolve internal addresses.
>>>
>>> M.
>>>
>>>
>>> On 20. 10. 2017 13:28, Łukasz Strzelec wrote:
>>>
>>>
>>> Hi Marko
>>>
>>> 1. yes
>>> 2. I did, and the result is the same - name or service not known :(
>>> I have no clue what is wrong - pods on other nodes are working fine
>>>
>>> Best regards :)
>>>
>>> 2017-10-20 11:19 GMT+02:00 Marko Lukša <[email protected]>:
>>>
>>>> 1. is the service in the same namespace as the pod you're testing in?
>>>>
>>>> 2. connect through the FQDN of the service (
>>>> kibanasg.fullnamespace.svc.cluster.local)
>>>>
>>>>
>>>> On 20. 10. 2017 11:14, Łukasz Strzelec wrote:
>>>>
>>>> Thx guys ;) Nope, that's not the case.
>>>> I've noticed that I can reach the SVC via its IP address. But when I try
>>>> to do the same with the svc name, I'm receiving "name or service not
>>>> known". Where should I start debugging?
>>>>
>>>> Best regards
>>>>
>>>> 2017-10-19 15:27 GMT+02:00 Mateus Caruccio <
>>>> [email protected]>:
>>>>
>>>>> Alpine's musl libc only supports "search" starting from version 1.1.13.
>>>>> Check if this is your case.
>>>>>
>>>>> --
>>>>> Mateus Caruccio / Master of Puppets
>>>>> GetupCloud.com
>>>>> We make the infrastructure invisible
>>>>> Gartner Cool Vendor 2017
>>>>>
>>>>> 2017-10-19 10:58 GMT-02:00 Cameron Braid <[email protected]>:
>>>>>
>>>>>> I had that happen quite a bit within containers based on Alpine Linux
>>>>>>
>>>>>> Cam
>>>>>>
>>>>>> On Thu, 19 Oct 2017 at 23:49 Łukasz Strzelec <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Dear all :)
>>>>>>>
>>>>>>> I have the following problem:
>>>>>>>
>>>>>>> [image: inline image 1]
>>>>>>>
>>>>>>>
>>>>>>> Frequently I have to restart origin-node to solve this issue, but I
>>>>>>> can't find the root cause of it.
>>>>>>> Does anybody have any idea? Where should I start looking?
>>>>>>> In addition, this problem affects different cluster nodes -
>>>>>>> randomly, different pods have this issue.
>>>>>>>
>>>>>>>
>>>>>>> Best regards
>>>>>>> --
>>>>>>> Ł.S.
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Ł.S.
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Ł.S.
>>>
>>>
>>>
>>
>>
>> --
>> Ł.S.
>>
>
>
>
> --
> Ł.S.
>
>
>


-- 
Ł.S.
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
