Ferdinanddb commented on PR #249:
URL: https://github.com/apache/spark-kubernetes-operator/pull/249#issuecomment-3020482176

   Hi @dongjoon-hyun, thank you for your work here.
   
   I am trying to deploy a spark-history-server pod using your example. The pod shows as running, but I cannot port-forward to it because port 18080 is not being listened on by the pod (and is therefore not exposed by the service).
   
   Could you please tell me what I should do to fix this?
   
   I have the following config:
   ```yaml
   apiVersion: spark.apache.org/v1
   kind: SparkApplication
   metadata:
     name: spark-history-server
     namespace: spark-operator
   spec:
     mainClass: "org.apache.spark.deploy.history.HistoryServer"
     sparkConf:
       # spark.jars.packages: "org.apache.hadoop:hadoop-aws:3.4.1"
       spark.jars: "https://repo1.maven.org/maven2/com/google/cloud/bigdataoss/gcs-connector/3.1.1/gcs-connector-3.1.1-shaded.jar"
       spark.jars.ivy: "/tmp/.ivy2.5.2"
       spark.driver.memory: "2g"
       spark.kubernetes.namespace: spark-operator
       spark.kubernetes.authenticate.driver.serviceAccountName: "spark"
       spark.kubernetes.container.image: "apache/spark:4.0.0-java21-scala"
       spark.history.fs.cleaner.enabled: "true"
       spark.history.fs.cleaner.maxAge: "30d"
       spark.history.fs.cleaner.maxNum: "100"
       spark.history.fs.eventLog.rolling.maxFilesToRetain: "10"
       
       spark.hadoop.fs.gs.impl: "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem"
       spark.hadoop.fs.AbstractFileSystem.gs.impl: "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS"
       spark.hadoop.fs.gs.auth.service.account.enable: "true"
       # spark.hadoop.fs.gs.auth.type: "COMPUTE_ENGINE"
       spark.hadoop.fs.gs.project.id: "rcim-prod-data-core-0"
       spark.hadoop.fs.defaultFS: "gs://SOME-BUCKET-GCS-spark-logs"
       spark.history.fs.logDirectory: "gs://SOME-BUCKET-GCS-spark-logs"
     runtimeVersions:
       sparkVersion: "4.0.0"
     applicationTolerations:
       restartConfig:
         restartPolicy: Always
         maxRestartAttempts: 9223372036854775807
   ```
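   To show what the service exposes versus what the History Server listens on, I also checked the service's port mapping (just a diagnostic sketch; the service name is taken from my port-forward command below):
   
   ```shell
   # List the ports the driver service exposes: the "spark-ui" port targets 4040
   # (the driver UI port), while the History Server listens on
   # spark.history.ui.port, which defaults to 18080.
   kubectl get svc spark-history-server-0-driver-svc \
     --namespace spark-operator \
     -o jsonpath='{range .spec.ports[*]}{.name} {.port} -> {.targetPort}{"\n"}{end}'
   ```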
   
   I replaced my actual GCS bucket name above, but again: I don't see any errors in the pod's logs.
   
   When I run the following port-forward command, I get this:
   ```
   kubectl port-forward svc/spark-history-server-0-driver-svc 8080:spark-ui --namespace spark-operator
   Forwarding from 127.0.0.1:8080 -> 4040
   Forwarding from [::1]:8080 -> 4040
   Handling connection for 8080
   E0630 21:06:51.716348 2639361 portforward.go:424] "Unhandled Error" err="an error occurred forwarding 8080 -> 4040: error forwarding port 4040 to pod 76c12381718e2542ae6e371d8c28992dc9e541439cd53b832df5aa5aaaeddbac, uid : failed to execute portforward in network namespace \"/var/run/netns/cni-5208164a-41a3-2769-bfaf-6b6f31eb1c80\": failed to connect to localhost:4040 inside namespace \"76c12381718e2542ae6e371d8c28992dc9e541439cd53b832df5aa5aaaeddbac\", IPv4: dial tcp4 127.0.0.1:4040: connect: connection refused IPv6 dial tcp6 [::1]:4040: connect: cannot assign requested address "
   ```
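   I guess forwarding straight to the History Server's default UI port on the pod might work around this, but I'm not sure it's the intended setup (the pod name below is my guess based on the service name; 18080 is the default `spark.history.ui.port`):
   
   ```shell
   # Possible workaround: bypass the service and forward directly to the
   # History Server's default UI port (18080) on the driver pod.
   # Pod name is assumed from the service name; verify with `kubectl get pods`.
   kubectl port-forward pod/spark-history-server-0-driver 18080:18080 \
     --namespace spark-operator
   ```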
   
   Thank you very much if you can help!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

