I had some errors, like the SqlBaseParser class missing, and figured out I
needed to generate these classes from SqlBase.g4 using ANTLR4. It works fine now.
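For anyone hitting the same problem, the generation step looks roughly like this (a sketch; the grammar path, output directory, and package name are illustrative, and `antlr4` assumes the ANTLR 4 tool is installed and on your PATH):

```shell
# Generate the SqlBaseParser/SqlBaseLexer classes from the grammar
# (point the paths at your checkout's SqlBase.g4).
antlr4 -visitor -package org.apache.spark.sql.catalyst.parser \
    -o target/generated-sources SqlBase.g4
```

After generating, mark the output directory as a generated-sources root (or reimport the project, as suggested below) so IntelliJ picks the classes up.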
On Thu, Jun 23, 2016 at 9:20 AM, Jeff Zhang wrote:
> It works well for me. You could try reimporting it into IntelliJ.
>
> On Thu, Jun
I need a variable to be broadcast from the driver to the executor processes in my
Spark Java application. I tried using Spark's broadcast mechanism to achieve this,
but had no luck.
Could someone help me do this, perhaps by sharing some code?
Thanks,
Praveen R
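For reference, the standard pattern with Spark's Java API looks something like this (a minimal sketch, assuming a configured cluster; the app name and the array contents are illustrative):

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

public class BroadcastExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("broadcast-example");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Ship a read-only lookup value to every executor once,
        // instead of serializing it with each task closure.
        Broadcast<int[]> lookup = sc.broadcast(new int[]{10, 20, 30});

        JavaRDD<Integer> indices = sc.parallelize(Arrays.asList(0, 1, 2));
        // Access the broadcast value via value() inside the closure.
        JavaRDD<Integer> mapped = indices.map(i -> lookup.value()[i]);

        System.out.println(mapped.collect());
        sc.stop();
    }
}
```

Note that the value is read through `value()` on the executors; a common mistake is capturing the raw driver-side variable instead of the `Broadcast` handle, which just serializes it with every task.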
You might use bin/shark-withdebug to find the exact cause of the failure.
That said, the easiest way to get the cluster running is to remove the
malfunctioning machine from the Spark cluster (remove it from the slaves file).
Hope that helps.
On Thu, May 22, 2014 at 9:04 PM, Yana Kadiyska
Do you have the cluster deployed on AWS? Could you check whether port 7077 is
accessible from the worker nodes?
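A quick way to check is to probe the port from a worker node (a sketch; the master hostname is a placeholder):

```shell
# From a worker node: succeeds only if the master's port 7077 is reachable.
nc -zv <master-hostname> 7077
```

On AWS this usually comes down to the security group not allowing inbound traffic on 7077 from the workers.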
On Tue, Apr 22, 2014 at 2:56 AM, jaeholee jho...@lbl.gov wrote:
Hi, I am trying to set up my own standalone Spark, and I started the master
node and worker nodes. Then I ran
Could you try setting the MASTER variable in spark-env.sh?
export MASTER=spark://master-ip:7077
To start the standalone cluster, ./sbin/start-all.sh should work as long as you
have passwordless SSH access to all machines. Any errors there?
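For completeness, start-all.sh reads the worker hostnames from conf/slaves, one per line (the names below are placeholders):

```
# conf/slaves — one worker hostname per line
worker-1
worker-2
worker-3
```

Passwordless SSH from the master to each of these hosts is what lets the script start all the workers in one go.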
On Tue, Apr 22, 2014 at 10:10 PM, jaeholee jho...@lbl.gov
Please check my comment on the shark-users thread:
https://groups.google.com/forum/#!searchin/shark-users/Failure$20recovery$20in$20Shark$20when$20cluster$20/shark-users/vUUGLZANxr8/MMCtKhqjhLMJ
On Tue, Apr 22, 2014 at 8:06 AM, rama0120 lakshminaarayana...@gmail.com wrote:
Hi,
I couldn't find
of your error
Wisely Chen
On Mon, Apr 14, 2014 at 9:29 PM, Praveen R
prav...@sigmoidanalytics.com wrote:
I had the error below while running Shark queries on a 30-node cluster and was
not able to start the Shark server or run any jobs.
14/04/11 19:06:52 ERROR scheduler.TaskSchedulerImpl: Lost
Can you try adding this to your spark-env.sh file and syncing it to all hosts?
export MASTER=spark://hadoop-pg-5.cluster:7077
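Syncing the file can be done with a simple loop (a sketch; the host list and the Spark install path are placeholders for your setup):

```shell
# Copy the updated spark-env.sh to every node listed in conf/slaves
# (paths and hosts are illustrative).
for host in $(cat conf/slaves); do
    scp conf/spark-env.sh "$host":/opt/spark/conf/
done
```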
On Sat, Apr 12, 2014 at 6:50 PM, ge ko koenig@gmail.com wrote:
Hi,
I'm starting to use Spark and have installed it within CDH5 using
Cloudera Manager.
I set up one
the result
which exists on bigdata003. When I run bin/shark on bigdata003, I can get the
result.
Even if that is the reason, I still cannot understand why the result is on
bigdata003 (the master is bigdata001)?
2014-03-25 18:41 GMT+08:00 Praveen R prav...@mobipulse.in:
Hi Qingyang Li,
Shark-0.9.0 uses a patched version of hive-0.11, and using the
configuration/metastore of hive-0.12 could be incompatible.
May I know why you are using the hive-site.xml from the previous Hive version
(to use an existing metastore?). Otherwise, you could just leave hive-site.xml
blank.