Whether I use 1 or 2 machines, the results are the same... Here are the
results I got using 1 and 2 receivers with 2 machines:
2 machines, 1 receiver:
sbt/sbt "run-main Benchmark 1 machine1 1000" 2>&1 | grep -i "Total delay\|record"
15/04/13 16:41:34 INFO JobScheduler: Total delay: 0.15
Are you running # of receivers = # machines?
TD
On Thu, Apr 9, 2015 at 9:56 AM, Saiph Kappa wrote:
Sorry, I was getting those errors because my workload was not sustainable.
However, I noticed that, just by running the spark-streaming-benchmark (
https://github.com/tdas/spark-streaming-benchmark/blob/master/Benchmark.scala
), I get no difference in execution time or number of processed records
If it is deterministically reproducible, could you generate full DEBUG-level
logs from the driver and the workers and give them to me? Basically I
want to trace through what is happening to the block that is not being
found.
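(For reference, a common way to get DEBUG-level logs in Spark is to edit conf/log4j.properties on the driver and on each worker before re-running the job; this is a sketch using the log4j 1.2 property names that Spark 1.x ships with in conf/log4j.properties.template:)

```properties
# conf/log4j.properties - raise root logging from INFO to DEBUG
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```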
And can you tell me which cluster manager you are using? Spark Standalone,
Mesos
Hi,
I am just running this simple example with
machineA: 1 master + 1 worker
machineB: 1 worker
val ssc = new StreamingContext(sparkConf, Duration(1000))
val rawStreams = (1 to numStreams).map(_ =>
  ssc.rawSocketStream[String](host, port, StorageLevel.MEMORY_ONLY_SER)).toArray
val union = ssc.union(rawStreams)