Hi TD,
Here's some information:

1. The cluster has one standalone master and 4 workers. The workers are 
co-hosted with Apache Cassandra. The master is set up with external ZooKeeper.
2. Each machine has 2 cores and 4 GB of RAM. This is for testing. All machines 
are VMware VMs. Spark has 2 GB dedicated to it on each node.
3. In addition to the streaming details, the master UI details as of now are 
given below. Only the streaming app is running.
4. I'm listening to two RabbitMQ queues using a custom RabbitMQ receiver (code: 
https://gist.github.com/ashic/b5edc7cfdc85aa60b066 ). The notifier code is here: 
https://gist.github.com/ashic/9abd352c691eafc8c9f3 (an illustrative receiver 
skeleton is sketched just after this list).
5. The receivers are initialised with the following code:
val ssc = new StreamingContext(sc, Seconds(2))
val messages1 = ssc.receiverStream(new RmqReceiver("abc", "abc", "/", "vdclog03", "abc_input", None))
val messages2 = ssc.receiverStream(new RmqReceiver("abc", "abc", "/", "vdclog04", "abc_input", None))
val messages = messages1.union(messages2)
val notifier = new RabbitMQEventNotifier("vdclog03", "abc", "abc_output_events", "abc", "abc", "/")

6. Usage:

  messages.map(x => ScalaMessagePack.read[RadioMessage](x))
    .flatMap(InputMessageParser.parse(_).getEvents())
    .foreachRDD(rdd => {
      rdd.foreachPartition(events => {
        cassandraConnector.withSessionDo(session => {
          // One session, storage wrapper, and notifier per partition,
          // so we don't pay connection setup per event.
          val graphStorage = new CassandraGraphStorage(session)
          val notificationStorage = new CassandraNotificationStorage(session)
          val savingNotifier = new SavingNotifier(notifier, notificationStorage)

          events.foreach(eventWrapper => eventWrapper.event match {
            // do some queries
            // save some stuff if needed to cassandra
            // raise a message to a separate queue with a msg => Unit operation
            case _ => // (cases elided)
          })
        })
      })
    })

7. The algorithm is simple: listen to messages from two separate RabbitMQ 
queues and union them. For each message, check the message properties. 
If needed, query Cassandra for additional details (a graph search, but it 
completes in 0.5-3 seconds, is rare, and shouldn't overwhelm things at this 
low input rate). If needed, save some info back into Cassandra (1-2 ms) and 
raise an event to the notifier.
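
For reference (since the receiver code lives in the external gist), it follows 
Spark Streaming's standard custom-receiver pattern. This is only a minimal 
sketch with placeholder class and parameter names, not the actual gist code:

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Illustrative skeleton only -- the real implementation is in the gist above.
class RmqReceiverSketch(host: String, queue: String)
  extends Receiver[Array[Byte]](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    // Spark calls this on the executor; consume on a background thread
    // so that onStart returns immediately.
    new Thread("RabbitMQ Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = {
    // The consume loop below watches isStopped() and exits on its own;
    // channel/connection teardown would go here.
  }

  private def receive(): Unit = {
    // Connect to RabbitMQ, then hand each message body to Spark via store().
    while (!isStopped()) {
      // val body: Array[Byte] = ...consume from `queue` on `host`...
      // store(body)
    }
  }
}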

I'm probably missing something basic, just wondering what. It has been running 
fine for about 42 hours now, but the numbers are a tad worrying.
            
Cheers,
Ashic.


Workers: 4
Cores: 8 Total, 4 Used
Memory: 8.0 GB Total, 2000.0 MB Used
Applications: 1 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE

Workers:
Id                                              Address                   State  Cores       Memory
worker-20141208131918-VDCAPP50.AAA.local-44476  VDCAPP50.AAA.local:44476  ALIVE  2 (1 Used)  2.0 GB (500.0 MB Used)
worker-20141208132012-VDCAPP52.AAA.local-34349  VDCAPP52.AAA.local:34349  ALIVE  2 (1 Used)  2.0 GB (500.0 MB Used)
worker-20141208132136-VDCAPP53.AAA.local-54000  VDCAPP53.AAA.local:54000  ALIVE  2 (1 Used)  2.0 GB (500.0 MB Used)
worker-20141211111627-VDCAPP49.AAA.local-57899  VDCAPP49.AAA.local:57899  ALIVE  2 (1 Used)  2.0 GB (500.0 MB Used)

Running Applications:
ID                       Name  Cores  Memory per Node  Submitted Time       User  State    Duration
app-20150120165844-0005  App1  4      500.0 MB         2015/01/20 16:58:44  root  WAITING  42.4 h

From: tathagata.das1...@gmail.com
Date: Thu, 22 Jan 2015 03:15:58 -0800
Subject: Re: Are these numbers abnormal for spark streaming?
To: as...@live.com; t...@databricks.com
CC: user@spark.apache.org

This is not normal. It's a huge scheduling delay!! Can you tell me more about 
the application? - cluster setup, number of receivers, what's the computation, etc.
On Thu, Jan 22, 2015 at 3:11 AM, Ashic Mahtab <as...@live.com> wrote:



Hate to do this...but...erm...bump? Would really appreciate input from others 
using Streaming. Or at least some docs that would tell me if these are expected 
or not.

From: as...@live.com
To: user@spark.apache.org
Subject: Are these numbers abnormal for spark streaming?
Date: Wed, 21 Jan 2015 11:26:31 +0000




Hi Guys,
I've got Spark Streaming set up for a low-data-rate system (using Spark's 
features for analysis rather than high throughput). Messages are coming in 
throughout the day, at around 1-20 per second (a finger-in-the-air estimate; 
not analysed yet). In the Spark Streaming UI for the application, I'm getting 
the following after 17 hours.

Streaming
Started at: Tue Jan 20 16:58:43 GMT 2015
Time since start: 18 hours 24 minutes 34 seconds
Network receivers: 2
Batch interval: 2 seconds
Processed batches: 16482
Waiting batches: 1

Statistics over last 100 processed batches

Receiver Statistics:
Receiver       Status  Location  Records in last batch  Minimum rate   Median rate    Maximum rate   Last Error
                                 [2015/01/21 11:23:18]  [records/sec]  [records/sec]  [records/sec]
RmqReceiver-0  ACTIVE  FOOOO     14                     4              7              27             -
RmqReceiver-1  ACTIVE  BAAAAR    12                     4              7              26             -

Batch Processing Statistics:
Metric            Last batch     Minimum        25th percentile  Median         75th percentile  Maximum
Processing Time   3 s 994 ms     157 ms         4 s 16 ms        4 s 961 ms     5 s 3 ms         5 s 171 ms
Scheduling Delay  9 h 15 m 4 s   9 h 10 m 54 s  9 h 11 m 56 s    9 h 12 m 57 s  9 h 14 m 5 s     9 h 15 m 4 s
Total Delay       9 h 15 m 8 s   9 h 10 m 58 s  9 h 12 m 0 s     9 h 13 m 2 s   9 h 14 m 10 s    9 h 15 m 8 s
Are these "normal". I was wondering what the scheduling delay and total delay 
terms are, and if it's normal for them to be 9 hours.
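
One back-of-the-envelope check (my own rough reasoning, so take it with a 
grain of salt): if processing a batch takes longer than the batch interval, 
the backlog should grow for as long as the app runs. With roughly 4 s of 
processing per 2 s batch, that would predict about half the uptime as 
scheduling delay:

// Rough sanity check, assuming batches are processed strictly one after
// another and each takes ~4 s against the 2 s batch interval (numbers above).
val batchIntervalSec = 2.0
val procTimeSec      = 4.0   // approximate per-batch processing time
val uptimeHours      = 18.4  // "Time since start" above

// A batch finishing now was generated when only (interval/procTime) of the
// elapsed batches had been processed, so its scheduling delay is roughly:
val expectedDelayHours = uptimeHours * (1.0 - batchIntervalSec / procTimeSec)
println(f"$expectedDelayHours%.1f h") // ~9.2 h -- close to the observed 9 h 15 m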

I've got a standalone Spark master and 4 Spark worker nodes. The streaming app 
has been given 4 cores, and it's using 1 core per worker node. The streaming 
app is submitted from a 5th machine, and that machine has nothing but the 
driver running. The worker nodes are running alongside Cassandra (and reading 
from and writing to it).
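
I don't have the submit script to hand, but the settings amount to roughly 
this (a sketch reconstructed from the master UI numbers; the master URL is a 
placeholder):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the submission config: 4 cores total (1 per worker) and
// 500 MB of executor memory per node, as shown in the master UI.
val conf = new SparkConf()
  .setAppName("App1")
  .setMaster("spark://<master-host>:7077") // standalone master (placeholder)
  .set("spark.cores.max", "4")             // 4 cores total -> 1 per worker
  .set("spark.executor.memory", "500m")    // matches "Memory per Node" above
val sc = new SparkContext(conf)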

Any insights would be appreciated.

Regards,
Ashic.
                                                                                
  

                                          
