Not sure if this is an SDN bug or a config issue, but I have 5 nodes and just
one of them is acting up.
After a pod is scheduled on that node, I can't reach its service through the
ClusterIP from another node or from another pod, but connecting directly to
the pod's endpoint IP works.
$ oc get svc
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)    SELECTOR               AGE
app             172.30.162.181   <none>        8080/TCP   deploymentconfig=app   45d
elasticsearch   172.30.181.189   <none>        9200/TCP   name=elasticsearch     55d
eva             172.30.177.88    <none>        8080/TCP   deploymentconfig=eva   39d
postgresql      172.30.51.228    <none>        5432/TCP   name=postgresql        55d
$ oc get endpoints
NAME            ENDPOINTS         AGE
app             10.1.2.4:8080     45d
elasticsearch   10.1.2.15:9200    55d
eva             10.1.3.30:8080    39d
postgresql      10.1.3.27:5432    55d
$ curl -v 172.30.181.189:9200
* About to connect() to 172.30.181.189 port 9200 (#0)
* Trying 172.30.181.189...
* Connection refused
* Failed connect to 172.30.181.189:9200; Connection refused
* Closing connection 0
$ curl -v 10.1.2.15:9200
* About to connect() to 10.1.2.15 port 9200 (#0)
* Trying 10.1.2.15...
* Connected to 10.1.2.15 (10.1.2.15) port 9200 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.1.2.15:9200
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Content-Length: 294
<
{
"status" : 200,
"name" : "Jaren",
"version" : {
"number" : "1.1.2",
"build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
"build_timestamp" : "2014-05-22T12:27:39Z",
"build_snapshot" : false,
"lucene_version" : "4.7"
},
"tagline" : "You Know, for Search"
}
* Connection #0 to host 10.1.2.15 left intact
- Tried rebooting the node, without luck.
- Tried evacuating the node; the rescheduled pods run fine on the other nodes.
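In case it helps narrow things down, these are the kinds of checks I would run
on the misbehaving node (a sketch only — it assumes the iptables-based service
proxy and the default OpenShift node unit name, which may differ on your setup):

```shell
# Check that the service proxy actually programmed NAT rules for the
# elasticsearch ClusterIP on this node (no output here would explain
# the "Connection refused").
iptables-save -t nat | grep 172.30.181.189

# Verify the SDN overlay route to the cluster pod subnet exists.
ip route | grep 10.1.

# Look for proxy/SDN errors in the node service logs
# (unit name assumed; adjust to your installation).
journalctl -u origin-node --since "10 min ago" | grep -i -e proxy -e sdn
```

If the iptables rules for the ClusterIP are present on the healthy nodes but
missing on the bad one, that would point at the proxy/SDN on that node rather
than at the service definition itself.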
---
Diego Castro / The CloudFather
GetupCloud.com - Eliminamos a Gravidade
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users