871921256 opened a new issue #133:
URL: https://github.com/apache/pulsar-helm-chart/issues/133


   **Describe the bug**
   After deploying the Pulsar Helm chart (Pulsar 2.7.2) with JWT authentication enabled via a symmetric secret key, Pulsar Manager cannot add the cluster as a new environment and the proxy keeps logging "Connection reset by peer" exceptions. Without the two JWT authentication parameters, everything works.
   
   **To Reproduce**
   Steps to reproduce the behavior:
   1. On Kubernetes with Helm and Pulsar 2.7.2, prepare the Helm release:
   ./scripts/pulsar/prepare_helm_release.sh -n pulsar-dev -k pulsar-dev --symmetric
   2. Install Pulsar:
   helm upgrade --install  pulsar-dev $NAMESPACE/pulsar \
    --timeout 10m \
    --set namespace=pulsar-dev \
    --set namespaceCreate=false \
    --set initialize=true \
    --set affinity.anti_affinity=true \
    --set auth.authentication.enabled=true \
    --set auth.authentication.jwt.usingSecretKey=true \
    --set auth.authorization.enabled=true \
    --set bookkeeper.volumes.journal.storageClassName=managed-nfs-storage2 \
    --set bookkeeper.volumes.ledgers.storageClassName=managed-nfs-storage
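
   As a sanity check on step 1, the JWT secrets that prepare_helm_release.sh should have created can be listed before (or after) the install. The secret names below are assumptions based on the pulsar-dev release prefix:

   # list the token-related secrets created by the prepare script
   kubectl get secrets -n pulsar-dev | grep token
   # expected (assumed names): pulsar-dev-token-symmetric-key,
   # pulsar-dev-token-admin, pulsar-dev-token-proxy, pulsar-dev-token-broker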
   
   All pods are running:
   [Martin@iZwz9cs3943soptmvn8mbrZ ~]$ kubectl get pod -n pulsar-dev
   NAME                                         READY   STATUS      RESTARTS   AGE
   nfs-client-provisioner-5c997cc6c8-kpv99      1/1     Running     0          6d11h
   nfs-client-provisioner2-8675974896-rmzpv     1/1     Running     0          6d8h
   pulsar-dev-bookie-0                          1/1     Running     0          26m
   pulsar-dev-bookie-1                          1/1     Running     0          26m
   pulsar-dev-bookie-2                          1/1     Running     0          26m
   pulsar-dev-bookie-init-42p64                 0/1     Completed   0          26m
   pulsar-dev-broker-0                          1/1     Running     1          26m
   pulsar-dev-broker-1                          1/1     Running     1          26m
   pulsar-dev-broker-2                          1/1     Running     0          26m
   pulsar-dev-grafana-9468669c6-sgbh7           1/1     Running     0          26m
   pulsar-dev-prometheus-59bddccd9c-mq56b       1/1     Running     0          26m
   pulsar-dev-proxy-0                           1/1     Running     0          26m
   pulsar-dev-proxy-1                           1/1     Running     0          26m
   pulsar-dev-proxy-2                           1/1     Running     0          26m
   pulsar-dev-pulsar-init-gwpzm                 0/1     Completed   0          26m
   pulsar-dev-pulsar-manager-67b964b786-fr87l   1/1     Running     0          26m
   pulsar-dev-recovery-0                        1/1     Running     0          26m
   pulsar-dev-toolset-0                         1/1     Running     0          26m
   pulsar-dev-zookeeper-0                       1/1     Running     0          26m
   pulsar-dev-zookeeper-1                       1/1     Running     0          25m
   pulsar-dev-zookeeper-2                       1/1     Running     0          25m
   
   3. Go to Pulsar Manager and add a New Environment; it reports "This environment is error. Please check it".
   
   4. Check the proxy logs: kubectl logs pulsar-dev-proxy-1 -n pulsar-dev
   
   5. See the error in the proxy log:
   "11:57:21.249 [pulsar-proxy-io-2-1] INFO  
org.apache.pulsar.proxy.server.ProxyConnection - [/10.92.251.122:38809] New 
connection opened
   11:57:21.249 [pulsar-proxy-io-2-1] WARN  
io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, 
and it reached at the tail of the pipeline. It usually means the last handler 
in the pipeline did not handle the exception.
   io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
Connection reset by peer
   11:57:21.249 [pulsar-proxy-io-2-1] WARN  
org.apache.pulsar.proxy.server.ProxyConnection - [/10.92.251.122:38809] Got 
exception NativeIoException : readAddress(..) failed: Connection reset by peer 
null
   11:57:21.249 [pulsar-proxy-io-2-1] INFO  
org.apache.pulsar.proxy.server.ProxyConnection - [/10.92.251.122:38809] 
Connection closed
   11:57:21.852 [pulsar-proxy-io-2-2] INFO  
org.apache.pulsar.proxy.server.ProxyConnection - [/10.92.251.124:65366] New 
connection opened
   11:57:21.852 [pulsar-proxy-io-2-2] WARN  
io.netty.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, 
and it reached at the tail of the pipeline. It usually means the last handler 
in the pipeline did not handle the exception.
   io.netty.channel.unix.Errors$NativeIoException: readAddress(..) failed: 
Connection reset by peer
   11:57:21.852 [pulsar-proxy-io-2-2] WARN  
org.apache.pulsar.proxy.server.ProxyConnection - [/10.92.251.124:65366] Got 
exception NativeIoException : readAddress(..) failed: Connection reset by peer 
null"
   
   
   If I install without these two parameters, there is no problem:
    --set auth.authentication.enabled=true
    --set auth.authentication.jwt.usingSecretKey=true
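
   To narrow down whether the brokers accept the token at all, a check like the following could be run from the toolset pod (assuming the chart pre-configures the toolset client with the admin token, which I have not verified):

   # assumes client.conf in the toolset pod already carries the JWT auth params
   kubectl exec -it pulsar-dev-toolset-0 -n pulsar-dev -- bin/pulsar-admin tenants list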
   
   **Expected behavior**
   With auth.authentication.enabled=true and auth.authentication.jwt.usingSecretKey=true, Pulsar Manager should be able to add the cluster as a new environment, and the proxy should not log "Connection reset by peer" exceptions.
   
   
   **Environment:**
    - Kubernetes + Helm, Pulsar Helm chart with Pulsar 2.7.2
   
   

