I'm using spark-on-kubernetes to submit a Spark app to Kubernetes.
Most of the time it runs smoothly, but sometimes I see in the logs after
submitting that the driver pod phase changed from Running to Pending, and
another container starts in the pod even though the first container exited
successfully.
Is this a bug?
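(For anyone trying to reproduce this: the pod's events and container restart
count should show why a second container was started. Standard kubectl
invocations, with placeholder names:

kubectl describe pod <spark-driver-pod> -n <namespace>
kubectl get pod <spark-driver-pod> -n <namespace> \
  -o jsonpath='{.status.containerStatuses[*].restartCount}'
)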
Hadoop free spark on kubernetes
Using custom Hadoop libraries in the Spark image does not work when
following the steps of the documentation (*) for running SparkPi on a
Kubernetes cluster.
* Usage of the Hadoop-free build:
https://spark.apache.org/docs/2.4.0/hadoop
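For context, a Hadoop-free build expects the Hadoop classpath to be supplied
at runtime via SPARK_DIST_CLASSPATH. A minimal sketch, assuming Hadoop is
installed at /opt/hadoop inside the driver/executor image (the path is a
placeholder):

# conf/spark-env.sh inside the image
export SPARK_DIST_CLASSPATH=$(/opt/hadoop/bin/hadoop classpath)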
at the current
state of the project(s)? What storage solution would be recommended
instead if Spark on Kubernetes is a given (so no YARN/Mesos)?
Looking forward to your input.
Arne
[1] https://databricks.com/session/hdfs-on-kubernetes-lessons-learned
[2] https://github.com/apache-spark-on
> spark.kubernetes.authenticate.driver.caCertFile
> to the path of your CA certificate on your local disk, spark-submit will
> create a secret that contains that certificate file and use that
> certificate to configure TLS for the driver pod’s communication with the
> API server.
icate it’s using for that communication. If there’s a fix
that needs to happen in Spark, feel free to indicate as such.
-Matt Cheah
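(For reference, a minimal sketch of the option being discussed, with a
placeholder path; as described above, spark-submit ships this file to the
driver as a Kubernetes secret:

--conf spark.kubernetes.authenticate.driver.caCertFile=/path/to/ca.pem
)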
From: Steven Stetzler
Date: Thursday, December 13, 2018 at 1:49 PM
To: "user@spark.apache.org"
Subject: Problem running Spark on Kubernetes: Ce
nt to my cluster access credentials, so both
Spark and kubectl can speak with the nodes.
I am running into an issue when trying to run the SparkPi example as
described in the Spark on Kubernetes tutorials. The command I am running
is:
./bin/spark-submit --master k8s://$CLUSTERIP --deploy-mode cl
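For comparison, the complete form of that command from the Spark on
Kubernetes docs looks roughly like this (image name and executor count are
placeholders):

./bin/spark-submit --master k8s://$CLUSTERIP --deploy-mode cluster \
  --name spark-pi --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar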
by offheap storage from Spark
that won’t be accounted for in just the heap size.
Hope this helps,
-Matt Cheah
From: Jayesh Lalwani
Date: Thursday, August 2, 2018 at 12:35 PM
To: "user@spark.apache.org"
Subject: Spark on Kubernetes: Kubernetes killing executors
We are running Spark 2.3 on a Kubernetes cluster. We have set the following
Spark configuration options:
"spark.executor.memory": "7g",
"spark.driver.memory": "2g",
"spark.memory.fraction": "0.75"
What we see is:
a) In the Spark UI, 5G has been allocated to each executor, which makes
sense
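For context, the 5G figure matches Spark's unified memory formula: roughly
(7 GiB heap - 300 MiB reserved) * 0.75 ≈ 5.1 GiB shows up as
storage/execution memory in the UI. A hedged sketch of giving the pod
headroom for off-heap usage (the 1g value is illustrative only):

--conf spark.executor.memory=7g \
--conf spark.executor.memoryOverhead=1g

The overhead is added on top of the heap when sizing the pod's memory
request, so Kubernetes is less likely to kill the executor for exceeding
its limit.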
We are trying to run a Spark job on Kubernetes cluster. The Spark job needs to
talk to some services external to the Kubernetes cluster through a proxy
server. We are setting the proxy by setting the extraJavaOptions like this
--conf spark.executor.extraJavaOptions=" -Dhttps.proxyHost=myhost
-D
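A complete form of that setting might look like the following (host and
port are placeholders; the driver side usually needs the same flags):

--conf spark.driver.extraJavaOptions="-Dhttps.proxyHost=myhost -Dhttps.proxyPort=8080" \
--conf spark.executor.extraJavaOptions="-Dhttps.proxyHost=myhost -Dhttps.proxyPort=8080"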
This is the problem:
> :/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar;/opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
Seems like some code is confusing things when mixing OSes. It's using
the Windows path separator when building a command line to be run on a
Linux host.
On Tue, Apr 1
Previous example was a bad paste (I tried a lot of variants, so sorry for
the wrong paste).
PS C:\WINDOWS\system32> spark-submit --master k8s://https://ip:8443
--deploy-mode cluster --name spark-pi --class
org.apache.spark.examples.SparkPi
--conf spark.executor.instances=1 --executor-memory 1G --conf
sp
The example jar path should be
local:///opt/spark/examples/*jars*/spark-examples_2.11-2.3.0.jar.
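Putting that together, a corrected submission from Windows might look like
this (cluster address and image are taken from the earlier messages and may
need adjusting):

spark-submit --master k8s://https://ip:8443 --deploy-mode cluster \
  --name spark-pi --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=andrusha/spark-k8s:2.3.0-hadoop2.7 \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar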
On Tue, Apr 10, 2018 at 1:34 AM, Dmitry wrote:
> Hello, I spent a lot of time trying to find what I did wrong, but found
> nothing. I have a minikube Windows-based cluster (Hyper-V as hypervisor)
> and try to
Hello, I spent a lot of time trying to find what I did wrong, but found
nothing. I have a minikube Windows-based cluster (Hyper-V as hypervisor) and
try to run examples against Spark 2.3. Tried several Docker image builds:
* several builds that I built myself
* andrusha/spark-k8s:2.3.0-hadoop2.7 from Docker Hub
The Spark on Kubernetes development community is pleased to announce
release 0.5.0
of Apache Spark with Kubernetes as a native scheduler back-end!
This release includes a few bug fixes and the following features:
- Spark R support
- Kubernetes 1.8 support
- Mounts emptyDir volumes for
The Spark on Kubernetes development community is pleased to announce
release 0.4.0 of Apache Spark with native Kubernetes scheduler back-end!
The dev community is planning to use this release as the reference for
upstreaming native kubernetes capability over the Spark 2.3 release cycle.
This
>> +1 (non-binding)
The Apache Spark on Kubernetes Community Development Project is pleased to
announce the latest release of Apache Spark with native Scheduler Backend
for Kubernetes! Features provided in this release include:
- Cluster-mode submission of Spark jobs to a Kubernetes cluster
- Support
Come learn about the community development project to add a native
Kubernetes scheduling back-end to Apache Spark! Meet contributors
and network with community members interested in running Spark on
Kubernetes. Learn how to run Spark jobs on your Kubernetes cluster;
find out how to contribute to