Hi
I am developing an application based on Spark 1.6. My library dependencies are as follows:
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.6.0"
)
By default it pulls in Hadoop 2.2.0, which is not my preference. I want to change the Hadoop version when importing Spark. How can I do that?
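In case it helps, one approach I have seen (a sketch, not verified against every Hadoop release; 2.6.0 below is only an illustrative version) is to exclude Spark's transitive hadoop-client and declare the version you want explicitly in build.sbt:

```scala
// build.sbt sketch: override Spark's default Hadoop dependency.
// The Hadoop version here (2.6.0) is just an example.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.0"
    exclude("org.apache.hadoop", "hadoop-client"),
  "org.apache.hadoop" % "hadoop-client" % "2.6.0"
)
```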
Hi
I just learned that Akka is under a commercial license; Spark, however, is under the Apache license.
Is there any problem with that?
Regards
Hi
I am also curious about this question.
Isn't the textFile function supposed to read an HDFS file? In this case, the file was read from the local filesystem. Is there any way for the textFile function to distinguish between the local filesystem and HDFS?
Besides, the OOM
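For the local-vs-HDFS part, my understanding is that textFile dispatches on the URI scheme of the path: a file:// prefix forces the local filesystem, an hdfs:// prefix forces HDFS, and a bare path falls back to fs.defaultFS from the Hadoop configuration. A sketch (the paths and namenode address are made up):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object TextFileSchemeDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("textFile-scheme-demo"))

    // Explicit scheme: read from the local filesystem.
    val localLines = sc.textFile("file:///tmp/input.txt")

    // Explicit scheme: read from HDFS (namenode address is a placeholder).
    val hdfsLines = sc.textFile("hdfs://namenode:8020/data/input.txt")

    // No scheme: resolved against fs.defaultFS from the Hadoop config.
    val defaultLines = sc.textFile("/data/input.txt")

    println(localLines.count())
    sc.stop()
  }
}
```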
Hi
I am studying the structure of Spark Streaming (my Spark version is 0.9.0). I have a question about SocketReceiver. In the onStart function:
---
protected def onStart() {
  logInfo("Connecting to " + host + ":" + port)
  val socket
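For context, here is my own plain-sockets illustration (using java.net, not the actual Spark source) of what a receiver like this does after connecting: open a socket to host:port and consume lines until the stream ends.

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket

object SocketReceiveSketch {
  // Connect to host:port and collect lines until the peer closes the
  // connection -- roughly what a socket-based receiver does with the
  // data it hands to Spark Streaming.
  def receiveLines(host: String, port: Int): List[String] = {
    val socket = new Socket(host, port)
    try {
      val in = new BufferedReader(new InputStreamReader(socket.getInputStream))
      Iterator.continually(in.readLine()).takeWhile(_ != null).toList
    } finally {
      socket.close()
    }
  }
}
```

You could exercise it against a local `nc -l 9999` session; the call returns once nc closes the connection.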
Hi
The 512 MB is the default memory size each executor gets, and your job probably does not need that much. You can create a SparkContext with

val sc = new SparkContext("local-cluster[2,1,512]", "test") // suppose you use the local-cluster mode

Here the 512 is the amount of memory (in MB) given to each worker.
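An equivalent, more explicit way (a sketch; the property name is from the standard Spark configuration) is to set the memory on a SparkConf:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch: local-cluster[numWorkers, coresPerWorker, memoryPerWorkerMB]
// gives 2 workers with 1 core and 512 MB each; spark.executor.memory
// sets the same limit explicitly.
val conf = new SparkConf()
  .setMaster("local-cluster[2,1,512]")
  .setAppName("test")
  .set("spark.executor.memory", "512m")
val sc = new SparkContext(conf)
```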