Hi Xi Shen,
You could set spark.executor.memory in the code itself: new
SparkConf().set("spark.executor.memory", "2g")
Or you can pass --conf spark.executor.memory=2g to spark-submit while submitting the jar.
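For example, at submit time (the class and jar names here are just placeholders):

```shell
# Sets the executor heap for every executor JVM of this application.
spark-submit \
  --class com.example.MyApp \
  --conf spark.executor.memory=2g \
  myapp.jar
```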
Regards
Jishnu Prathap
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Monday, March 16
import com.google.gson.{GsonBuilder, JsonParser}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}
/**
* Examines the collected tweets and trains a model based on th
Hi,
If your message is a String, you will have to change the Encoder and
Decoder to StringEncoder and StringDecoder.
If your message is a byte[], you can use DefaultEncoder & DefaultDecoder.
Also, don't forget to add the import statements depending on your encoder and decoder.
import kafka.ser
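For instance, with String payloads the receiver would be wired up roughly like this (a sketch based on the Spark Streaming Kafka API of that era; the StreamingContext ssc, the broker address, and the topic name are illustrative assumptions):

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils

// ssc is an existing StreamingContext; broker and topic are placeholders.
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("my-topic"))
```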
Hi Akhil
Thanks for the response
Our use case is object detection in multiple videos. It is essentially searching
for an image in a video by matching the image against all the frames of
the video. I am able to do it in plain Java code using the OpenCV library now, but I
don't think it is scalable to
Hi
I am getting a Stack Overflow Error:
Exception in thread "main" java.lang.StackOverflowError
at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at scala.util.parsing.combinator.Pars
Hi
Thank you Akhil, it worked like a charm ☺
I used the file writer outside rdd.foreach; that might be the reason for the
non-serializable exception.
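For anyone hitting the same thing, the working pattern (the writer created inside the closure so nothing non-serializable is captured by the task; the output path is just a placeholder) looks roughly like:

```scala
import java.io.{BufferedWriter, FileWriter}

rdd.foreachPartition { records =>
  // Created on the executor, inside the closure, so the non-serializable
  // writer is never shipped from the driver.
  val writer = new BufferedWriter(
    new FileWriter(s"/tmp/out-${java.util.UUID.randomUUID}.txt"))
  try records.foreach { r => writer.write(r.toString); writer.newLine() }
  finally writer.close()
}
```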
Thanks & Regards
Jishnu Menath Prathap
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Friday, November 21, 2014 1:15 PM
To: Jishnu Menath Prat
Hi Akhil
Thanks for the reply.
But it creates different directories. I tried using a FileWriter but it shows a
non-serializable error.
val stream = TwitterUtils.createStream(ssc, None) //, filters)
val statuses = stream.map(
status => sentimentAnalyzer.findSentiment({
stat
Hi, I am also having a similar problem. Any fix suggested?
Originally Posted by GaganBM
Hi,
I am trying to persist the DStreams to text files. When I use the inbuilt API
'saveAsTextFiles' as:
stream.saveAsTextFiles(resultDirectory)
this creates a number of subdirectories, for each batch, and w
Hi
My question is generic:
§ Is it possible to save the streams to one single file? If yes, can you give
me a link or a code sample?
§ I tried using .saveAsTextFiles but it's creating a different file for each
stream. I need to update the same file instead of creating a different file for
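One sketch of updating a single file (assuming each batch is small enough to collect to the driver; the output path is a placeholder):

```scala
import java.io.{BufferedWriter, FileWriter}

stream.foreachRDD { rdd =>
  // Append each batch to one driver-side file instead of letting
  // saveAsTextFiles create a new directory per batch.
  val writer = new BufferedWriter(new FileWriter("/tmp/stream-output.txt", true))
  try rdd.collect().foreach { line => writer.write(line.toString); writer.newLine() }
  finally writer.close()
}
```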
Hi
Thanks Akhil, you saved the day. It's working perfectly.
Regards
Jishnu Menath Prathap
From: Akhil Das [mailto:ak...@sigmoidanalytics.com]
Sent: Thursday, November 13, 2014 3:25 PM
To: Jishnu Menath Prathap (WT01 - BAS)
Cc: Akhil [via Apache Spark User List]; user@spark.apache
Hi
I am getting the following error while running the
TwitterPopularTags example. I am using spark-1.1.0-bin-hadoop2.4.
jishnu@getafix:~/spark/bin$ run-example TwitterPopularTags
Spark assembly has been built with Hive, including Datanucleus jars on classpath
j
Hi
I am trying to run a basic Twitter stream program but am getting blank
output. Please correct me if I am missing something.
import org.apache.spark.SparkConf
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.twitter.TwitterUtils
import org.apache.spark.st
Hi
Sorry for the repeated mails. My post was not accepted by the mailing list due
to some problem in postmas...@wipro.com, so I had to send it manually. Still it
was not visible for half an hour, so I retried. But later all the posts were visible.
I deleted it from the page, but it was already delivered.
No, I am not passing any argument.
I am getting this error while starting the Master.
I am able to run the same Spark binary on another machine with Ubuntu
installed.
The information contained in this electronic message and any attachments to
this message are intended for the excl
Hi,
I am getting this weird error while starting the Worker.
-bash-4.1$ spark-class org.apache.spark.deploy.worker.Worker
spark://osebi-UServer:59468
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/09/24 16:22:04 INFO worker.Worker: Registered signal handle
Hi everyone, I am new to Spark. I am posting some basic doubts
I met while trying to create a standalone cluster for a small PoC.
1) My corporate firewall blocked port 7077, which is the default port of the
Master URL,
so I used start-master.sh --port 8080 (also tried with several other po
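The standalone launch scripts also read the master port from the environment, so another option (a sketch; 8090/8091 are just example open ports) is:

```shell
# conf/spark-env.sh (sourced by the standalone launch scripts)
export SPARK_MASTER_PORT=8090        # master RPC port (default 7077)
export SPARK_MASTER_WEBUI_PORT=8091  # master web UI port (default 8080)
```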