Thanks for your reply.
I invoked my program with the broker IP and host, and it triggered as expected,
but I see the error below:
./bin/spark-submit --class org.stream.processing.JavaKafkaStreamEventProcessing
--master local spark-stream-processing-0.0.1-SNAPSHOT-jar-with-dependencies.jar
Hi all,
I am trying to work with the spark-redis connector (redislabs), which
requires that all transactions between Redis and Spark be in RDDs. The language
I am using is Java, but the connector does not accept JavaRDDs. So I tried
using SparkContext in my code instead of JavaSparkContext. But
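For anyone hitting the same mismatch: rather than switching to SparkContext, the usual workaround is to keep the Java API and unwrap the Scala types it wraps. A minimal sketch (assuming spark-core on the classpath; the RDD contents are just placeholders):

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.RDD;

public class UnwrapExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("unwrap-example");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        JavaRDD<String> javaRdd = jsc.parallelize(Arrays.asList("a", "b", "c"));

        // A JavaRDD wraps a Scala RDD; rdd() exposes it for APIs that want RDD<T>.
        RDD<String> scalaRdd = javaRdd.rdd();

        // Likewise, sc() exposes the underlying SparkContext if a library needs one.
        System.out.println(scalaRdd.count() + " elements in app " + jsc.sc().appName());

        jsc.stop();
    }
}
```

This keeps the rest of the driver code on the Java API and only hands the unwrapped RDD to the connector.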
Thanks, Sean!
Sean Owen wrote on 09/25/2015 06:35:46 AM:
> From: Sean Owen
> To: Reynold Xin , Richard Hillegas/San
> Francisco/IBM@IBMUS
> Cc: "dev@spark.apache.org"
> Date: 09/25/2015 07:21 PM
> Subject: Re:
This is a user list question, not a dev list question.
It looks like your driver is having trouble communicating with the Kafka
brokers. Make sure the broker host and port are reachable from the driver
host (using nc or telnet); make sure that you're providing the _broker_
host and port to
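The same reachability test that nc or telnet performs can be done from the driver host with a plain JDK socket; the host and port below are placeholders for your actual broker address:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 9092 is Kafka's default broker port; substitute your broker's address.
        String host = "localhost";
        int port = 9092;
        System.out.println(host + ":" + port
                + (reachable(host, port, 3000) ? " reachable" : " NOT reachable"));
    }
}
```

If this fails from the driver host but succeeds from the broker host, the problem is network configuration (or the broker advertising an address the driver cannot resolve), not the Spark job itself.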
I tried to run the HdfsTest sample on Windows with spark-1.4.0:
bin\run-sample org.apache.spark.examples.HdfsTest
but got the exception below. Does anybody have any idea what went wrong here?
15/09/28 16:33:56.565 ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException
at
Hi,
Could someone recommend monitoring tools for Spark Streaming?
By extending StreamingListener we can dump the delay in processing of
batches and some alert messages.
But are there any web UI tools where we can monitor failures, see delays in
processing and error messages, and set up alerts?
What version of Hadoop are you using?
Is that version consistent with the one used to build Spark 1.4.0?
Cheers
On Mon, Sep 28, 2015 at 4:36 PM, Renyi Xiong wrote:
> I tried to run HdfsTest sample on windows spark-1.4.0
>
> bin\run-sample
Hello all,
Goal: I want to use APIs from the HttpClient 4.4.1 library. I am using the
Maven Shade plugin to generate the JAR.
Findings: When I run my program as a Java application within Eclipse, everything
works fine, but when I run the program using spark-submit I get the
following
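A common cause of this symptom is that Spark puts its own, older copy of HttpClient on the driver classpath, which shadows the 4.4.1 classes in the fat JAR. One frequently used fix is to relocate the HttpClient packages inside the shaded JAR; a sketch of the relevant maven-shade-plugin configuration (the shadedPattern prefix is arbitrary):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- Rewrite HttpClient's packages so they cannot clash with
             the older copy Spark puts on the classpath. -->
        <pattern>org.apache.http</pattern>
        <shadedPattern>myshaded.org.apache.http</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

After relocation, the application's bytecode is rewritten to reference the relocated packages, so it always loads the 4.4.1 classes bundled in the fat JAR regardless of what Spark ships.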
The effects of changing the pom.xml extend beyond cases in which we wish to
modify Spark itself. In addition, when pulling from trunk with git we need to
either stash or roll back the changes before rebasing.
An effort to look into a better solution (possibly including evaluating Ted
Yu's suggested
+1
1) Build binary instruction: ./make-distribution.sh --tgz --skip-java-test
-Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver
-DskipTests
2) Run Spark SQL with YARN client mode
This 1.5.1 RC1 package has better test results than the previous 1.5.0, except
for
Hi Spark Developers,
The Spark 1.5.1 documentation is already publicly accessible (
https://spark.apache.org/docs/latest/index.html), but the release is not. Is
that intentional?
Best Regards,
Jerry
On Mon, Sep 28, 2015 at 9:21 AM, james wrote:
> +1
>
> 1) Build binary
It's on Maven Central already. These various updates have to happen in
some order, and you'll probably see an inconsistent state for a day or
so while things get slowly updated. Consider it released when there's
an announcement, I suppose.
On Mon, Sep 28, 2015 at 11:07 PM, Jerry Lam