For more context, the relevant CoreOS issue is
https://github.com/coreos/bugs/issues/1114.

I'd also like to throw in a quick plug for the kube-aws
<https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html> project
we maintain, which might be relevant here.

There's also a Kubernetes issue
<https://github.com/kubernetes/kubernetes/issues/19765> discussing how to
bundle the kubelet's socat dependency rather than relying on a host copy,
which could resolve this specific problem, though it has neither clear
consensus nor a timeline.
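
To illustrate the failure mode: when handling a port-forward request, the
kubelet looks up a socat binary on the node's PATH, so the error Norman hit
just means that lookup failed on his worker. A minimal sketch of the
equivalent check (a hypothetical script for illustration, not what the
kubelet literally runs):

```shell
#!/bin/sh
# Sketch of the PATH lookup the kubelet effectively performs before
# port-forwarding; "socat not found" here corresponds to the
# "unable to do port forwarding: socat not found" error in the report.
if command -v socat >/dev/null 2>&1; then
  echo "socat found at $(command -v socat)"
else
  echo "socat not found"
fi
```

Running that on a stock Container Linux node should print "socat not
found", which is exactly why the kubelet-wrapper approach (shipping socat
inside the container the kubelet runs in) fixes it.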

I don't have a better answer than Kyle's; I just wanted to add some extra
context.

Best,
Euan



On Tue, Aug 16, 2016 at 11:31 AM, Kyle Brown <[email protected]> wrote:

> Norman,
>
> It seems that this repository isn't using our kubelet-wrapper script,
> which ships the dependencies (like socat) inside the rkt fly container
> the kubelet runs in.
>
> See: https://coreos.com/kubernetes/docs/latest/kubelet-wrapper.html
>
> You should be able to modify the cloud-configs to use the
> kubelet-wrapper. It should be as simple as editing the ExecStart of the
> kubelet.service unit to point to /usr/lib/coreos/kubelet-wrapper:
>
>     - name: kubelet.service
>       command: start
>       content: |
>         [Unit]
>         After=docker.socket
>         Requires=docker.socket
>         [Service]
>         Environment=KUBELET_VERSION=v1.3.4_coreos.0
>         ExecStart=/usr/lib/coreos/kubelet-wrapper \
>           --allow-privileged=true \
>           --api-servers=http://master.k8s:8080 \
>           --cloud-provider=aws \
>           --cluster-dns=10.3.0.10 \
>           --cluster-domain=cluster.local \
>           --config=/etc/kubernetes/manifests \
>           --kubeconfig=/etc/kubernetes/kubeconfig.yml \
>           --register-node=true \
>           --tls-cert-file=/etc/kubernetes/ssl/k8s-worker.pem \
>           --tls-private-key-file=/etc/kubernetes/ssl/k8s-worker-key.pem
>         Restart=always
>         RestartSec=5
>         [Install]
>         WantedBy=multi-user.target
>
> Cheers,
> Kyle Brown
>
>
>
> On Tue, Aug 16, 2016 at 11:07 AM, Norman Khine <[email protected]> wrote:
>
>> Hello, I have set up a multi-zone CoreOS cluster using
>> https://github.com/kz8s/tack. All seems to work apart from when I try to
>> port-forward, and I get this error:
>>
>> ➜  ~ kubectl port-forward trint-mongodb-2-94074312-6uzb3 37017:27017
>> Forwarding from 127.0.0.1:37017 -> 27017
>> Forwarding from [::1]:37017 -> 27017
>> Handling connection for 37017
>> E0816 11:08:45.179697    9528 portforward.go:327] an error occurred
>> forwarding 37017 -> 27017: error forwarding port 27017 to pod
>> trint-mongodb-2-94074312-6uzb3_default, uid : unable to do port
>> forwarding: socat not found.
>> E0816 11:09:44.409445    9528 portforward.go:173] lost connection to pod
>> ➜  ~ kubectl get pods
>> NAME                               READY     STATUS    RESTARTS   AGE
>> trint-mongodb-1-4062934469-1v7ts   1/1       Running   0          49m
>> trint-mongodb-2-94074312-6uzb3     1/1       Running   0          1h
>> trint-mongodb-3-2820699583-cz5lz   1/1       Running   0          1h
>>
>>
>> any advice on how to resolve this?
>>
>
>
