briedel opened a new issue, #335: URL: https://github.com/apache/pulsar-helm-chart/issues/335
When using the Helm chart on GKE, the proxy is unable to bind to port 80, which crashes the proxy pod and puts it into a restart loop. The logs show this:

```
Defaulted container "pulsar-300-proxy" out of: pulsar-300-proxy, wait-zookeeper-ready (init), wait-broker-ready (init)
[conf/proxy.conf] Applying config authenticationEnabled = true
[conf/proxy.conf] Applying config authenticationProviders = org.apache.pulsar.broker.authentication.AuthenticationProviderToken
[conf/proxy.conf] Applying config authorizationEnabled = false
[conf/proxy.conf] Applying config brokerClientAuthenticationParameters = file:///pulsar/tokens/proxy/token
[conf/proxy.conf] Applying config brokerClientAuthenticationPlugin = org.apache.pulsar.client.impl.auth.AuthenticationToken
[conf/proxy.conf] Applying config brokerServiceURL = pulsar://pulsar-300-broker:6650
[conf/proxy.conf] Applying config brokerWebServiceURL = http://pulsar-300-broker:8080
[conf/proxy.conf] Applying config clusterName = pulsar-300
[conf/proxy.conf] Applying config forwardAuthorizationCredentials = true
[conf/proxy.conf] Applying config httpNumThreads = 8
[conf/proxy.conf] Applying config servicePort = 6650
[conf/proxy.conf] Applying config statusFilePath = /pulsar/status
[conf/proxy.conf] Applying config superUserRoles = admin,broker-admin,proxy-admin
[conf/proxy.conf] Applying config tokenPublicKey = file:///pulsar/keys/token/public.key
[conf/proxy.conf] Applying config webServicePort = 80
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/pulsar/lib/io.netty-netty-common-4.1.77.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-11-16T17:17:00,431+0000 [main] INFO org.apache.pulsar.broker.authentication.AuthenticationService - [org.apache.pulsar.broker.authentication.AuthenticationProviderToken] has been loaded.
2022-11-16T17:17:00,804+0000 [main] INFO org.apache.pulsar.proxy.extensions.ProxyExtensionsUtils - Searching for extensions in /pulsar/./proxyextensions
2022-11-16T17:17:00,806+0000 [main] WARN org.apache.pulsar.proxy.extensions.ProxyExtensionsUtils - extension directory not found
2022-11-16T17:17:00,889+0000 [main] INFO org.eclipse.jetty.util.log - Logging initialized @2923ms to org.eclipse.jetty.util.log.Slf4jLog
2022-11-16T17:17:01,052+0000 [main] INFO org.apache.pulsar.proxy.server.ProxyService - Started Pulsar Proxy at /0.0.0.0:6650
2022-11-16T17:17:01,375+0000 [main] INFO org.eclipse.jetty.server.Server - jetty-9.4.48.v20220622; built: 2022-06-21T20:42:25.880Z; git: 6b67c5719d1f4371b33655ff2d047d24e171e49a; jvm 11.0.16+8-post-Ubuntu-0ubuntu120.04
2022-11-16T17:17:01,423+0000 [main] INFO org.eclipse.jetty.server.session - DefaultSessionIdManager workerName=node0
2022-11-16T17:17:01,424+0000 [main] INFO org.eclipse.jetty.server.session - No SessionScavenger set, using defaults
2022-11-16T17:17:01,427+0000 [main] INFO org.eclipse.jetty.server.session - node0 Scavenging every 660000ms
2022-11-16T17:17:01,446+0000 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@4349754{/metrics,null,AVAILABLE}
2022-11-16T17:17:02,121+0000 [main] WARN org.glassfish.jersey.server.wadl.WadlFeature - JAXBContext implementation could not be found. WADL feature is disabled.
2022-11-16T17:17:02,500+0000 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@b967222{/,null,AVAILABLE}
2022-11-16T17:17:02,557+0000 [main] WARN org.glassfish.jersey.server.wadl.WadlFeature - JAXBContext implementation could not be found. WADL feature is disabled.
2022-11-16T17:17:02,661+0000 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@6a2eea2a{/proxy-stats,null,AVAILABLE}
2022-11-16T17:17:02,711+0000 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@2bf94401{/admin,null,AVAILABLE}
2022-11-16T17:17:02,713+0000 [main] INFO org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@2532b351{/lookup,null,AVAILABLE}
2022-11-16T17:17:02,722Z [jdk.internal.loader.ClassLoaders$AppClassLoader@5ffd2b27] error Uncaught exception in thread main: Failed to start HTTP server on ports [80]
java.io.IOException: Failed to start HTTP server on ports [80]
    at org.apache.pulsar.proxy.server.WebServer.start(WebServer.java:243)
    at org.apache.pulsar.proxy.server.ProxyServiceStarter.start(ProxyServiceStarter.java:223)
    at org.apache.pulsar.proxy.server.ProxyServiceStarter.main(ProxyServiceStarter.java:185)
Caused by: java.io.IOException: Failed to bind to /0.0.0.0:80
    at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:349)
    at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:310)
    at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
    at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
    at org.eclipse.jetty.server.Server.doStart(Server.java:401)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
    at org.apache.pulsar.proxy.server.WebServer.start(WebServer.java:221)
    ... 2 more
Caused by: java.net.SocketException: Permission denied
    at java.base/sun.nio.ch.Net.bind0(Native Method)
    at java.base/sun.nio.ch.Net.bind(Net.java:459)
    at java.base/sun.nio.ch.Net.bind(Net.java:448)
    at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
    at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
    at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:344)
    ... 9 more
```

I have tried setting the image tag to `2.10.2`:

```yaml
proxy:
  repository: apachepulsar/pulsar-all
  # uses defaultPulsarImageTag when unspecified
  tag: 2.10.2
  pullPolicy: IfNotPresent
```
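Changing the image tag did not help, which makes sense: the `java.net.SocketException: Permission denied` comes from Jetty trying to bind `0.0.0.0:80`, and an unprivileged (non-root) Linux process cannot bind ports below 1024. If the web service really has to listen on port 80 inside the container, one option might be to lower the unprivileged-port floor at the pod level. A minimal sketch, assuming the cluster allows this sysctl and that the chart's proxy StatefulSet template could be patched to include it (I have not verified either):

```yaml
# Pod-spec fragment (not a chart value): lets non-root processes in
# this pod bind ports below 1024. Requires a Kubernetes version and
# cluster policy that treat net.ipv4.ip_unprivileged_port_start as a
# safe sysctl.
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_unprivileged_port_start
        value: "0"
```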
When starting the proxy container with the web service on port 8080 instead, everything works:

```yaml
proxy:
  # use a component name that matches your grafana configuration
  # so the metrics are correctly rendered in grafana dashboard
  component: proxy
  replicaCount: 3
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    metrics: ~
  # This is how prometheus discovers this component
  podMonitor:
    enabled: true
    interval: 10s
    scrapeTimeout: 10s
  # True includes annotation for statefulset that contains hash of corresponding configmap,
  # which will cause pods to restart on configmap change
  restartPodsOnConfigMapChange: false
  # nodeSelector:
  #   cloud.google.com/gke-nodepool: default-pool
  probe:
    liveness:
      enabled: true
      failureThreshold: 10
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
    readiness:
      enabled: true
      failureThreshold: 10
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
    startup:
      enabled: false
      failureThreshold: 30
      initialDelaySeconds: 60
      periodSeconds: 10
      timeoutSeconds: 5
  affinity:
    anti_affinity: true
    # Set the anti affinity type. Valid values:
    # requiredDuringSchedulingIgnoredDuringExecution - rules must be met for pod to be scheduled (hard); requires at least one node per replica
    # preferredDuringSchedulingIgnoredDuringExecution - scheduler will try to enforce but not guarantee
    type: requiredDuringSchedulingIgnoredDuringExecution
  annotations: {}
  tolerations: []
  gracePeriod: 30
  resources:
    requests:
      memory: 128Mi
      cpu: 0.2
  # extraVolumes and extraVolumeMounts allow you to mount other volumes
  # Example Use Case: mount ssl certificates
  # extraVolumes:
  #   - name: ca-certs
  #     secret:
  #       defaultMode: 420
  #       secretName: ca-certs
  # extraVolumeMounts:
  #   - name: ca-certs
  #     mountPath: /certs
  #     readOnly: true
  extraVolumes: []
  extraVolumeMounts: []
  extreEnvs: []
  # - name: POD_IP
  #   valueFrom:
  #     fieldRef:
  #       apiVersion: v1
  #       fieldPath: status.podIP
  ## Proxy configmap
  ## templates/proxy-configmap.yaml
  ##
  configData:
    PULSAR_MEM: >
      -Xms64m -Xmx64m -XX:MaxDirectMemorySize=64m
    PULSAR_GC: >
      -XX:+UseG1GC
      -XX:MaxGCPauseMillis=10
      -Dio.netty.leakDetectionLevel=disabled
      -Dio.netty.recycler.linkCapacity=1024
      -XX:+ParallelRefProcEnabled
      -XX:+UnlockExperimentalVMOptions
      -XX:+DoEscapeAnalysis
      -XX:ParallelGCThreads=4
      -XX:ConcGCThreads=4
      -XX:G1NewSizePercent=50
      -XX:+DisableExplicitGC
      -XX:-ResizePLAB
      -XX:+ExitOnOutOfMemoryError
      -XX:+PerfDisableSharedMem
    httpNumThreads: "8"
  ## Add a custom command to the start up process of the proxy pods (e.g. update-ca-certificates, jvm commands, etc)
  additionalCommand:
  ## Proxy service
  ## templates/proxy-service.yaml
  ##
  ports:
    http: 8080
    https: 443
    pulsar: 6650
    pulsarssl: 6651
  service:
    annotations: {}
    type: LoadBalancer
  ## Proxy ingress
  ## templates/proxy-ingress.yaml
  ##
  ingress:
    enabled: false
    annotations: {}
    tls:
      enabled: false
      ## Optional. Leave it blank if your Ingress Controller can provide a default certificate.
      secretName: ""
    hostname: ""
    path: "/"
  ## Proxy PodDisruptionBudget
  ## templates/proxy-pdb.yaml
  ##
  pdb:
    usePolicy: true
    maxUnavailable: 1
```
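With `ports.http: 8080` the pod starts cleanly, but clients then have to connect on port 8080 as well, since the chart appears to use the same value for both the container port and the LoadBalancer port. To keep exposing port 80 externally while the proxy listens on 8080, a separate Service that remaps the port should work. A sketch with placeholder names and labels (the selector would need to match the labels actually set on the proxy pods):

```yaml
# Hypothetical standalone Service mapping external port 80 to the
# proxy's web service port 8080.
apiVersion: v1
kind: Service
metadata:
  name: pulsar-proxy-http   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: pulsar             # placeholder label
    component: proxy        # placeholder label
  ports:
    - name: http
      port: 80              # port clients connect to
      targetPort: 8080      # the proxy's webServicePort
```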
