Hi all,
I am trying to trap the UI kill event of a Spark application from the driver.
Somehow the exception thrown is not propagated to the driver main
program. See for example using spark-shell below.
Is there a way to get hold of this event and shutdown the driver program?
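One avenue worth trying (a sketch only, not verified against a UI kill; it assumes the spark-shell's `sc` is available): register a SparkListener and react in onApplicationEnd. A kill from the UI tears the context down, so the callback may only have time for lightweight cleanup such as flagging a shutdown.

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}

// Register a listener on the existing SparkContext (sc in spark-shell).
sc.addSparkListener(new SparkListener {
  override def onApplicationEnd(end: SparkListenerApplicationEnd): Unit = {
    println(s"Application ended at ${end.time}; shutting down driver")
    // e.g. set a flag the main thread polls, close external resources, etc.
  }
})
```

A JVM-level `sys.addShutdownHook { ... }` in the driver is another option, since the executors being killed eventually brings the driver JVM down as well.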
Regards,
Noorul
Sending a plain text mail to test whether my mail appears on the list.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/This-is-a-test-mail-please-ignore-tp28538.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
A better forum would be
https://groups.google.com/forum/#!forum/spark-jobserver
or
https://gitter.im/spark-jobserver/spark-jobserver
Regards,
Noorul
Madabhattula Rajesh Kumar writes:
> Hi,
>
> I am getting the below exception when I start the job-server
>
>
> When "Initial job has not accepted any resources" shows up, what all can
> be wrong? Going through Stack Overflow and various blogs did not help.
> Maybe we need better logging for this? Adding dev
>
Did you take a look at the Spark UI to see your resource availability?
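That warning usually means the application is asking for more cores or memory than any worker currently has free. A minimal sketch of capping the request so it fits (the values are illustrative, not recommendations; compare them against the Workers table in the master UI):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("resource-check")
  .set("spark.cores.max", "2")        // don't ask for more cores than are free
  .set("spark.executor.memory", "1g") // must fit in a worker's free memory

val sc = new SparkContext(conf)
```

Another application holding all the cluster's cores (e.g. a long-running shell) produces the same symptom, so it is worth checking the Running Applications list too.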
Thanks and Regards
Noorul
Hi all,
I have a streaming application with a batch interval of 10 seconds.
val sparkConf = new SparkConf().setAppName("RMQWordCount")
.set("spark.streaming.stopGracefullyOnShutdown", "true")
val ssc = new StreamingContext(sparkConf, Seconds(10))
I also use the reduceByKeyAndWindow() API.
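For reference, the windowed reduction in such an app looks roughly like this (`pairs` is an assumed `DStream[(String, Int)]` of word counts; the names and durations are illustrative, with the constraint that both window and slide must be multiples of the 10-second batch interval):

```scala
// Sketch only: pairs is assumed to come from the RMQ input stream.
val windowed = pairs.reduceByKeyAndWindow(
  (a: Int, b: Int) => a + b, // associative reduce function
  Seconds(60),               // window length (multiple of batch interval)
  Seconds(20)                // slide interval (multiple of batch interval)
)
windowed.print()
```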
Reza zade writes:
> Hi
>
> I have set up a cloudera cluster and work with spark. I want to install
> spark-jobserver on it. What should I do?
Maybe you should send this to the spark-jobserver mailing list.
https://github.com/spark-jobserver/spark-jobserver#contact
Thanks and Regards
Noorul
kalkimann writes:
> Hi,
> Spark 1.6.2 is the latest brew package I can find. The Spark 2.0.x brew
> package is missing, as best I know.
>
> Is there a schedule when spark-2.0 will be available for "brew install"?
>
Did you do a 'brew update' before searching? I installed
Hi all,
I was trying to test --supervise flag of spark-submit.
The documentation [1] says that the flag helps in restarting your
application automatically if it exited with a non-zero exit code.
I am looking for some clarification on that documentation. In this
context, does application mean
Spark version: 1.6.1
Cluster Manager: Standalone
I am experimenting with cluster mode deployment along with supervise for
high availability of streaming applications.
1. Submit a streaming job in cluster mode with supervise
2. Say that driver is scheduled on worker1. The app started
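For context, the submission being tested looks roughly like this (host, class name, and jar path are placeholders; --supervise only takes effect with cluster deploy mode on a standalone or Mesos master):

```shell
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --supervise \
  --class com.example.StreamingApp \
  /path/to/app.jar
```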
Hi all,
I am trying to copy data from one Cassandra cluster to another using
Spark + the Cassandra connector. At the source I have around 200 GB of data,
but while running, the Spark stage shows the output as 406 GB and the data is
still getting copied. I wonder why it is showing such a high number.
carlilek writes:
> My users use Spark 1.5.1 in standalone mode on an HPC cluster, with a
> smattering still using 1.4.0
>
> I have been getting reports of errors like this:
>
> 15/12/21 15:40:33 ERROR FileAppender: Error writing stream to file
>
Are you using DSE Spark? If so, are you pointing Spark Job Server to use
DSE Spark?
Thanks and Regards
Noorul
Anand anand.vi...@monotype.com writes:
I am new to the Spark world and Job Server.
My code:
package spark.jobserver
import java.nio.ByteBuffer
import
mas mas.ha...@gmail.com writes:
Hi all!
I am trying to install Spark on my standalone machine. I am able to run the
master, but when I try to run the slaves it gives me the following error. Any
help in this regard will be highly appreciated.
Sandy Ryza sandy.r...@cloudera.com writes:
Creating a SparkContext and setting master as yarn-cluster unfortunately
will not work.
SPARK-4924 added APIs for doing this in Spark, but won't be included until
1.4.
-Sandy
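The API Sandy mentions (SPARK-4924) surfaced as SparkLauncher from 1.4 onward; a sketch of programmatic submission with it (the jar path and class name are placeholders):

```scala
import org.apache.spark.launcher.SparkLauncher

// Launches a separate spark-submit process rather than creating a
// SparkContext in-process, which is why yarn-cluster works this way.
val process = new SparkLauncher()
  .setAppResource("/path/to/app.jar")
  .setMainClass("com.example.MyApp")
  .setMaster("yarn-cluster")
  .launch()

process.waitFor() // block until the submitted application exits
```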
Did you look into something like [1]? With that you can make rest API
had other performance issues with the Spark Cassandra connector.
Thanks and Regards
Noorul
On Thu, Mar 26, 2015 at 1:13 PM, Noorul Islam K M noo...@noorul.com wrote:
sparkx y...@yang-cs.com writes:
Hi,
I have a Spark job and a dataset of 0.5 Million items. Each item performs
some sort of computation (joining a shared external dataset, if that matters)
and produces an RDD containing 20-500 result items. Now I would like
to combine all these RDDs and
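That combination step is where trouble usually starts: chaining `.union()` once per item builds a lineage half a million levels deep on the driver. A sketch of the single-call alternative (`items`, `processItem`, and `Result` are assumed stand-ins for the job's own names):

```scala
import org.apache.spark.rdd.RDD

// Build one RDD per item, then combine them in a single call.
val perItem: Seq[RDD[Result]] = items.map(processItem)

// SparkContext.union produces one flat UnionRDD instead of a deeply
// nested chain of rdd1.union(rdd2).union(...) calls.
val combined: RDD[Result] = sc.union(perItem)
```

With this many items it may also be worth restructuring so the items themselves are parallelized (`sc.parallelize(items).flatMap(...)`), keeping all per-item work on the executors instead of creating 0.5 million RDDs on the driver.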
Hi all,
We have a cloud application, to which we are adding a reporting service.
For this we have narrowed down to use Cassandra + Spark for data store
and processing respectively.
Since the cloud application is separate from the Cassandra + Spark deployment,
what is the ideal method to interact with Spark?
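One common pattern for that separation is an HTTP layer such as spark-jobserver, which this list has come up against before; a sketch of the interaction (host, port, jar name, and job class are all placeholders):

```shell
# Upload the reporting application jar once:
curl --data-binary @target/reporting.jar \
  http://jobserver-host:8090/jars/reporting

# Then the cloud application triggers job runs over plain HTTP:
curl -X POST \
  'http://jobserver-host:8090/jobs?appName=reporting&classPath=com.example.ReportJob'
```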