Hi,
You have to specify the worker nodes of the Spark cluster when you
configure the cluster.
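A minimal sketch, assuming a standalone deployment: the workers are listed
one hostname per line in conf/slaves on the master node (the hostnames
below are placeholders):

  # conf/slaves
  worker1.example.com
  worker2.example.com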
Thanks
Madhvi
On Thursday 30 April 2015 01:30 PM, xiaohe lan wrote:
Hi Madhvi,
If I only install Spark on one node and use spark-submit to run an
application, which nodes are the workers?
Hi,
Follow the installation instructions at the following link:
http://mbonaci.github.io/mbo-spark/
You don't need to install Spark on every node. Just install it on one node,
or you can install it on a remote system as well and make a Spark cluster.
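A minimal sketch of bringing the cluster up from the node where Spark is
installed (these scripts ship in Spark's sbin/ directory; start-slaves.sh
launches a worker on every host listed in conf/slaves):

  sbin/start-master.sh
  sbin/start-slaves.sh

The master URL to submit applications against is then shown on the master
web UI (port 8080 by default).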
Thanks
Madhvi
On Thursday 30 April 2015 09:31 AM ...
Thank you, Deepak. It worked.
Madhvi
On Tuesday 28 April 2015 01:39 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
val conf = new SparkConf()
  .setAppName(detail)
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.mb", arguments.get("buffersize").get)
  .set("spark.kryoserializer.buffer.max.mb", arguments.get("maxbuffersize").get)
  .set("spark.driver.maxResultSize", arguments.get("maxResultSize").get)
  .registerKryoClasses(Array(classOf[org.apache.accumulo.core.data.Key]))
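For context, a hypothetical sketch of the `arguments` lookup used above,
assuming it is a plain Scala Map[String, String] whose get returns an
Option that .get then unwraps (keys and values here are illustrative only):

  val arguments: Map[String, String] = Map(
    "buffersize"    -> "64",
    "maxbuffersize" -> "512",
    "maxResultSize" -> "2g")
  val sc = new SparkContext(conf)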
... and Accumulo can be used with Spark.
Thanks
Madhvi
... at 12:19 PM, Akhil Das <ak...@sigmoidanalytics.com> wrote:
Change your import from mapred to mapreduce, like:
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
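With the mapreduce (new Hadoop API) input format, the table can be read via
SparkContext.newAPIHadoopRDD. A minimal sketch, assuming Accumulo 1.6-era
client APIs; the instance name, ZooKeeper hosts, credentials, and table
name below are placeholders:

  import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat
  import org.apache.accumulo.core.client.security.tokens.PasswordToken
  import org.apache.accumulo.core.data.{Key, Value}
  import org.apache.accumulo.core.security.Authorizations
  import org.apache.hadoop.mapreduce.Job
  import org.apache.spark.{SparkConf, SparkContext}

  val sc = new SparkContext(new SparkConf().setAppName("accumulo-read"))

  // Accumulo connection settings are carried on a Hadoop Job configuration.
  val job = Job.getInstance(sc.hadoopConfiguration)
  AccumuloInputFormat.setConnectorInfo(job, "user", new PasswordToken("pass"))
  AccumuloInputFormat.setZooKeeperInstance(job, "instanceName", "zkhost:2181")
  AccumuloInputFormat.setInputTableName(job, "mytable")
  AccumuloInputFormat.setScanAuthorizations(job, new Authorizations())

  // Each record is an Accumulo (Key, Value) pair.
  val rdd = sc.newAPIHadoopRDD(job.getConfiguration,
    classOf[AccumuloInputFormat], classOf[Key], classOf[Value])
  println(rdd.count())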
Thanks
Best Regards
On Wed, Apr 22, 2015 at 2:42 PM, madhvi <madhvi.gu...@orkash.com> wrote:
I am using the following imports:
import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
I am not able to figure out what the problem is here.
Thanks
Madhvi
On Tuesday 21 April 2015 12:12 PM, Akhil Das wrote:
Your spark master should be spark://swetha:7077 :)
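A minimal sketch of pointing an application at that master (the host name
"swetha" is taken from this thread; the same URL can also be passed to
spark-submit via --master):

  val conf = new SparkConf()
    .setAppName("example")              // placeholder app name
    .setMaster("spark://swetha:7077")   // host:port shown on the master web UI
  val sc = new SparkContext(conf)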
Thanks
Best Regards
On Mon, Apr 20, 2015 at 2:44 PM, madhvi <madhvi.gu...@orkash.com> wrote:
PFA screenshot of my cluster UI
Thanks
On Monday 20 April 2015 ...
Hi all,
Is there anything to integrate Spark with Accumulo, or to make Spark
process data stored in Accumulo?
Thanks
Madhvi Gupta
On Mon, Apr 20, 2015 at 3:05 PM, madhvi <madhvi.gu...@orkash.com> wrote:
On Monday 20 April 2015 02:52 PM, SURAJ SHETH wrote:
Hi Madhvi,
I think the memory requested by your job, i.e. 2.0 GB, is higher than
what is available.
Please request 256 MB explicitly while creating the Spark Context and
try again.
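A minimal sketch of what that might look like (assuming the standard
spark.executor.memory setting; the app name is a placeholder):

  val conf = new SparkConf()
    .setAppName("example")
    .set("spark.executor.memory", "256m")  // request 256 MB per executor
  val sc = new SparkContext(conf)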
Thanks and Regards,
Suraj Sheth
Tried the same, but it is still not working.
How much memory are you allocating for your job? Can you share a
screenshot of your cluster UI and the code snippet that you are
trying to run?
Thanks
Best Regards
On Mon, Apr 20, 2015 at 12:37 PM, madhvi <madhvi.gu...@orkash.com> wrote:
Hi,
I did the same as you told, but now it is giving the following error:
ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler:
All masters are unresponsive! Giving up.
On the UI it is showing that the master is working.
Thanks
Madhvi
On Monday 20 April 2015 12:28 PM, Akhil Das wrote:
In ...
...=2
export SPARK_EXECUTOR_MEMORY=1g
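For reference, a sketch of a small standalone conf/spark-env.sh; the
variable names are standard Spark 1.x standalone settings, and the values
are placeholders rather than the ones from this thread:

  export SPARK_WORKER_INSTANCES=2  # worker processes to launch per machine
  export SPARK_WORKER_MEMORY=1g    # total memory a worker can give to executors
  export SPARK_EXECUTOR_MEMORY=1g  # default memory per executor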
I am running the Spark standalone cluster. In the cluster UI it is showing
all workers with allocated resources, but still it is not working.
What other configurations need to be changed?
Thanks
Madhvi Gupta