Hi, your service should be headless, without any clusterIP: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
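
Something like the following should work as the headless version — it is just your service.yaml with clusterIP set to None, so treat it as an untested sketch rather than a drop-in file:

apiVersion: v1
kind: Service
metadata:
  name: nifi
spec:
  clusterIP: None   # headless: DNS returns the pod IPs instead of a single virtual IP
  selector:
    app: nifi
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    name: http
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: sock-port
  - protocol: TCP
    port: 8082
    targetPort: 8082
    name: clust-port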
In your case 10.131.5.15 seems to be one of the pod IPs and 10.128.2.40
should be the service IP. All your nodes try to use the same service IP, I
believe...

Peter

On Wed, Oct 24, 2018 at 9:40 PM a.toy <adam....@useitc.com> wrote:

> Peter Wilcsinszky wrote
> > what is your service yaml exactly? how do you create the pods is it a
> > statefulset?
>
> My sts.yaml and service.yaml files are pretty straightforward:
>
> */sts.yaml:/*
>
> apiVersion: apps/v1beta1
> kind: StatefulSet
> metadata:
>   name: nifi
> spec:
>   serviceName: nifi
>   replicas: 3
>   updateStrategy:
>     type: RollingUpdate
>   template:
>     metadata:
>       labels:
>         app: nifi
>     spec:
>       containers:
>       - name: nifi
>         image: local-docker-registry.com/nifi:latest
>         imagePullPolicy: "Always"
>         ports:
>         - containerPort: 8080
>           name: http
>         - containerPort: 8443
>           name: https
>         - containerPort: 8081
>           name: sock-port
>         - containerPort: 8082
>           name: clust-port
>         env:
>         - name: NAMESPACE
>           value: infrastructure
>         - name: SERVICE_NAME
>           value: nifi
>         - name: POD_ID
>           valueFrom:
>             fieldRef:
>               fieldPath: metadata.name
>
> /*service.yaml:*/
>
> apiVersion: v1
> kind: Service
> metadata:
>   name: nifi
> spec:
>   ports:
>   - protocol: TCP
>     port: 8080
>     targetPort: 8080
>     name: http
>   - protocol: TCP
>     port: 8081
>     targetPort: 8081
>     name: sock-port
>   - protocol: TCP
>     port: 8082
>     targetPort: 8082
>     name: clust-port
>   selector:
>     app: nifi
>
> Peter Wilcsinszky wrote
> > how do you access the UI? does it have a separate service?
>
> I'm accessing the UI via an OpenShift route (browser URI would be
> 'http://nifi.example.com/nifi'):
>
> apiVersion: v1
> kind: Route
> metadata:
>   name: nifi
> spec:
>   host: nifi.example.com
>   to:
>     kind: Service
>     name: nifi
>   port:
>     targetPort: http
>
> The docker image I'm using is custom to my project, but the only really
> relevant part is that its entrypoint script grabs the FQDN of the OpenShift
> service and the node's IP and parses them into the nifi.properties file
> before starting nifi:
>
> */entrypoint.sh*/
>
> #!/bin/bash
>
> NIFI_HOME=${NIFI_HOME:=/opt/nifi}
>
> # Ensure necessary environment variables are set.
> if [[ -z "${POD_ID}" ]]; then
>     echo "ERROR: Missing 'POD_ID' environment variable"
>     exit 1
> elif [[ -z "${SERVICE_NAME}" ]]; then
>     echo "ERROR: Missing 'SERVICE_NAME' environment variable"
>     exit 1
> elif [[ -z "${NAMESPACE}" ]]; then
>     echo "ERROR: Missing 'NAMESPACE' environment variable"
>     exit 1
> fi
>
> FQDN="$POD_ID.$SERVICE_NAME.$NAMESPACE.svc"
>
> echo "============================================================================="
> echo "Preconfiguring NiFi hostname (FQDN=$FQDN)"
> echo "============================================================================="
>
> # Remove properties from 'nifi.properties' if they exist.
> sed -i -e '/^nifi.cluster.node.address=.*/d' "$NIFI_HOME/conf/nifi.properties"
> sed -i -e '/^nifi.web.http.host=.*/d' "$NIFI_HOME/conf/nifi.properties"
> sed -i -e '/^nifi.web.https.host=.*/d' "$NIFI_HOME/conf/nifi.properties"
> sed -i -e '/^nifi.remote.input.host=.*/d' "$NIFI_HOME/conf/nifi.properties"
>
> node_ip=$(ip -f inet a show eth0 | grep inet | awk '{ print $2 }' | cut -d/ -f1)
> echo "Acquired container's IP ($node_ip)"
>
> # Add the lines to the end of nifi.properties
> echo "" >> "$NIFI_HOME/conf/nifi.properties"
> echo "nifi.web.http.host=$FQDN" >> "$NIFI_HOME/conf/nifi.properties"
> echo "nifi.cluster.node.address=$node_ip" >> "$NIFI_HOME/conf/nifi.properties"
> echo "nifi.remote.input.host=$node_ip" >> "$NIFI_HOME/conf/nifi.properties"
>
> echo "Starting nifi.."
> sh "$NIFI_HOME/bin/nifi.sh" run
>
> This version (which I had the most hope for) ended up with a config like:
>
> nifi.remote.input.secure=false
> nifi.remote.input.socket.port=8081
> nifi.remote.input.http.enabled=true
> nifi.remote.input.host=10.131.5.15
>
> nifi.web.http.port=8080
> nifi.web.http.host=nifi-0.nifi.infrastructure.svc
> nifi.web.http.network.interface.default=eth0
>
> nifi.cluster.node.address=10.131.5.15
> nifi.cluster.node.protocol.port=8082
>
> I figured the only hostname that needed to be the service name was the
> 'http.host', but alas. Still no dice.
>
> Here's a screenshot of the error I'm seeing:
>
> <http://apache-nifi-users-list.2361937.n4.nabble.com/file/t593/error_ss.png>
>
> Peter Wilcsinszky wrote
> > what Zookeeper setup do you use?
>
> We use another custom ZK docker image that we build; however, this is what
> gets announced for each node in ZK:
>
> [zk: localhost:2181(CONNECTED) 6] get '/nifi/leaders/Primary Node/_c_1d4da8c5-76e0-4d19-9555-2658adf27167-lock-0000000217'
> 10.128.2.40:8082
> cZxid = 0x10032330a
> ctime = Wed Oct 24 19:10:44 UTC 2018
> mZxid = 0x10032330a
> mtime = Wed Oct 24 19:10:44 UTC 2018
> pZxid = 0x10032330a
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x1656d4a582d5d61
> dataLength = 16
> numChildren = 0
>
> [zk: localhost:2181(CONNECTED) 8] get '/nifi/leaders/Cluster Coordinator/_c_4e00ef82-09c9-4e08-98ae-382cea354aaf-lock-0000000679'
> 10.128.2.40:8082
> cZxid = 0x100323307
> ctime = Wed Oct 24 19:10:41 UTC 2018
> mZxid = 0x100323307
> mtime = Wed Oct 24 19:10:41 UTC 2018
> pZxid = 0x100323307
> cversion = 0
> dataVersion = 0
> aclVersion = 0
> ephemeralOwner = 0x656d4a69185bc3
> dataLength = 16
> numChildren = 0
>
>
> --
> Sent from: http://apache-nifi-users-list.2361937.n4.nabble.com/
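
One way to sanity check after switching to a headless service (a rough sketch, assuming nslookup is available inside your NiFi image): the per-pod name your entrypoint builds should resolve straight to that pod's IP, and the service name should return one record per pod instead of a single virtual IP.

# expect one A record per ready nifi pod once the service is headless
nslookup nifi.infrastructure.svc.cluster.local

# per-pod record that only exists when the StatefulSet's governing service is headless
nslookup nifi-0.nifi.infrastructure.svc.cluster.local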