A basic change needed by Spark is setting the executor memory, which
defaults to 512MB.
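For example, on Spark 1.x the executor memory can be raised per job or globally; the `4g` value below is an arbitrary placeholder, and `com.example.MyApp` / `myapp.jar` are hypothetical names:

```shell
# Per job: pass the flag when launching the application
./bin/spark-submit --executor-memory 4g \
  --class com.example.MyApp myapp.jar

# Or globally, in conf/spark-defaults.conf:
#   spark.executor.memory   4g
```

The command-line flag takes precedence over the value in spark-defaults.conf for that run.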
On Mon, Mar 23, 2015 at 10:16 AM, Denny Lee wrote:
How are you running your Spark instance, out of curiosity? Via YARN or
standalone mode? When connecting the Spark Thrift Server to the Spark service,
have you allocated enough memory and CPU when executing with Spark?
On Sun, Mar 22, 2015 at 3:39 AM fanooos wrote:
> We have Cloudera CDH 5.3 installed
” while starting the spark shell.
From: Anusha Shamanur [mailto:anushas...@gmail.com]
Sent: Wednesday, March 4, 2015 5:07 AM
To: Cheng, Hao
Subject: Re: Spark SQL Thrift Server start exception :
java.lang.ClassNotFoundException:
org.datanucleus.api.jdo.JDOPersistenceManagerFactory
Hi,
I am getting the same error. There is no lib folder in m
Copy those jars into the $SPARK_HOME/lib/
datanucleus-api-jdo-3.2.6.jar
datanucleus-core-3.2.10.jar
datanucleus-rdbms-3.2.9.jar
see https://github.com/apache/spark/blob/master/bin/compute-classpath.sh#L120
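Cheng Hao's suggestion amounts to something like the following. The source path of the jars is an assumption and depends on how they were obtained (e.g. a Maven repository or a Hive-enabled Spark build); the version numbers are the ones listed above:

```shell
# Copy the DataNucleus jars into $SPARK_HOME/lib/ so the Thrift Server's
# classpath script can pick them up (source directory is hypothetical)
cp datanucleus-api-jdo-3.2.6.jar \
   datanucleus-core-3.2.10.jar \
   datanucleus-rdbms-3.2.9.jar \
   "$SPARK_HOME/lib/"
```

After copying, restart the Thrift Server so the new classpath takes effect.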
-----Original Message-----
From: fanooos [mailto:dev.fano...@gmail.com]
Sent: Tuesday, M
Thanks Michael.
Is there a way to specify OFF_HEAP (i.e., Tachyon) via the Thrift Server?
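For reference, in Spark 1.x the Tachyon-backed store was configured through properties such as the following in spark-defaults.conf (the URL and path are placeholder values); whether the Thrift Server's CACHE TABLE honours an OFF_HEAP storage level through this configuration is exactly the open question here:

```shell
# Sketch of Spark 1.x Tachyon store settings (values are placeholders)
#   spark.tachyonStore.url       tachyon://localhost:19998
#   spark.tachyonStore.baseDir   /tmp/spark_tachyon
```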
Thanks!
On Tue, Aug 5, 2014 at 11:06 AM, Michael Armbrust wrote:
We are working on an overhaul of the docs before the 1.1 release. In the
meantime try: "CACHE TABLE <tableName>".
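Assuming the Thrift Server is reachable on the default port, the statement can be issued through beeline like this (`my_table` is a placeholder table name):

```shell
# Connect to the Spark Thrift Server via beeline (default port 10000)
# and cache the table in memory
./bin/beeline -u jdbc:hive2://localhost:10000 -e "CACHE TABLE my_table;"
```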
On Tue, Aug 5, 2014 at 9:02 AM, John Omernik wrote:
> I have things working on my cluster with the SparkSQL Thrift Server.
> (Thank you Yin Huai at Databricks!)
>
> That said, I was curious