Re: How to make Mesos Cluster Dispatcher of Spark 1.6.1 load my config files?

2016-10-19 Thread Daniel Carroza
Hi Chanh,

I found a workaround that works for me:
http://stackoverflow.com/questions/29552799/spark-unable-to-find-jdbc-driver/40114125#40114125
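
The gist of it: with --deploy-mode cluster the dispatcher starts the driver on some Mesos agent, so spark.driver.extraClassPath and -Dconfig.file= values that point at paths on the submitting machine only resolve if the same files exist at the same paths on that agent. Independently of that, the "No suitable driver found" part usually goes away once the JDBC driver class is named explicitly, so DriverManager can resolve it even when the Postgres jar is only added to the classpath at runtime. A minimal sketch of that part (this is my summary, not the exact code from the answer; it assumes the job reads Postgres through the DataFrame JDBC API, and the class name, table name and credentials are placeholders):

import java.util.Properties;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class PostgresReadSketch {

  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("PostgresReadSketch");
    JavaSparkContext sc = new JavaSparkContext(conf);
    SQLContext sqlContext = new SQLContext(sc);

    Properties props = new Properties();
    props.setProperty("user", "kafkajobs");      // placeholder credentials
    props.setProperty("password", "secret");     // placeholder credentials
    // The key line: name the driver class explicitly so DriverManager
    // does not have to discover it from the jar on its own.
    props.setProperty("driver", "org.postgresql.Driver");

    // "events" is a placeholder table name.
    DataFrame df = sqlContext.read()
        .jdbc("jdbc:postgresql://psqlhost:5432/kafkajobs", "events", props);
    df.show();

    sc.stop();
  }
}

The Postgres jar still has to reach the driver and executors (e.g. via --jars), but with the driver class spelled out the lookup no longer depends on DriverManager discovering it by URL alone.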

Regards,
Daniel

On Thu., Oct. 6, 2016 at 6:26, Chanh Le (<giaosu...@gmail.com>) wrote:

> Hi everyone,
> I have the same config in both modes, and I want to be able to change the
> config each time I run, so I created a config file and run my application
> with it. My problem is:
> It works with these options when I don't use the Mesos Cluster Dispatcher.
>
> /build/analytics/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
>   --files /build/analytics/kafkajobs/prod.conf \
>   --conf 'spark.executor.extraJavaOptions=-Dconfig.fuction.conf' \
>   --conf 'spark.driver.extraJavaOptions=-Dconfig.file=/build/analytics/kafkajobs/prod.conf' \
>   --conf 'spark.driver.extraClassPath=/build/analytics/spark-1.6.1-bin-hadoop2.6/lib/postgresql-9.3-1102.jdbc41.jar' \
>   --conf 'spark.executor.extraClassPath=/build/analytics/spark-1.6.1-bin-hadoop2.6/lib/postgresql-9.3-1102.jdbc41.jar' \
>   --class com.ants.util.kafka.PersistenceData \
>   --master mesos://10.199.0.19:5050 \
>   --executor-memory 5G \
>   --driver-memory 2G \
>   --total-executor-cores 4 \
>   --jars /build/analytics/kafkajobs/spark-streaming-kafka_2.10-1.6.2.jar \
>   /build/analytics/kafkajobs/kafkajobs-prod.jar
>
>
> And it didn't work with these:
>
> /build/analytics/spark-1.6.1-bin-hadoop2.6/bin/spark-submit \
>   --files /build/analytics/kafkajobs/prod.conf \
>   --conf 'spark.executor.extraJavaOptions=-Dconfig.fuction.conf' \
>   --conf 'spark.driver.extraJavaOptions=-Dconfig.file=/build/analytics/kafkajobs/prod.conf' \
>   --conf 'spark.driver.extraClassPath=/build/analytics/spark-1.6.1-bin-hadoop2.6/lib/postgresql-9.3-1102.jdbc41.jar' \
>   --conf 'spark.executor.extraClassPath=/build/analytics/spark-1.6.1-bin-hadoop2.6/lib/postgresql-9.3-1102.jdbc41.jar' \
>   --class com.ants.util.kafka.PersistenceData \
>   --master mesos://10.199.0.19:7077 \
>   --deploy-mode cluster \
>   --supervise \
>   --executor-memory 5G \
>   --driver-memory 2G \
>   --total-executor-cores 4 \
>   --jars /build/analytics/kafkajobs/spark-streaming-kafka_2.10-1.6.2.jar \
>   /build/analytics/kafkajobs/kafkajobs-prod.jar
>
> It threw me an error: Exception in thread "main" java.sql.SQLException:
> No suitable driver found for jdbc:postgresql://psqlhost:5432/kafkajobs
> which means my --conf options didn't take effect and the config I put in
> /build/analytics/kafkajobs/prod.conf wasn't loaded. Only the settings in
> application.conf (the default config) were loaded.
>
> How do I make the Mesos Cluster Dispatcher load my config?
>
> Regards,
> Chanh
>
> --
Daniel Carroza Santana
Vía de las Dos Castillas, 33, Ática 4, 3ª Planta.
28224 Pozuelo de Alarcón. Madrid.
Tel: +34 91 828 64 73 // @stratiobd <https://twitter.com/StratioBD>


Re: Reading from RabbitMq via Apache Spark Streaming

2015-11-19 Thread Daniel Carroza
Hi,

Just answered on Github:
https://github.com/Stratio/rabbitmq-receiver/issues/20
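
For the archive, the gist of it (a sketch along the same lines, not a verbatim copy of the Github answer): toString() on a JavaRDD only prints the RDD's description, which is exactly the "BlockRDD[1] at ReceiverInputDStream at RabbitMQInputDStream.scala:33" line you are seeing. To print the received messages themselves, iterate over the RDD's elements inside foreachRDD. Assuming the receiver yields String messages (i.e. receiverStream is a JavaReceiverInputDStream<String>), the foreachRDD block from the code below becomes something like:

receiverStream.foreachRDD(new Function<JavaRDD<String>, Void>() {
    @Override
    public Void call(JavaRDD<String> rdd) throws Exception {
        // collect() pulls the received strings back to the driver;
        // fine for a hello-world test, not for real data volumes.
        for (String msg : rdd.collect()) {
            System.out.println("Value Received " + msg);
        }
        return null;
    }
});

With that change, each received message should show up on the console instead of the RDD description.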

Regards

Daniel Carroza Santana

Vía de las Dos Castillas, 33, Ática 4, 3ª Planta.
28224 Pozuelo de Alarcón. Madrid.
Tel: +34 91 828 64 73 // @stratiobd <https://twitter.com/StratioBD>

2015-11-19 10:02 GMT+01:00 D <subharaj.ma...@gmail.com>:

> I am trying to write a simple "Hello World" kind of application using
> Spark Streaming and RabbitMQ, in which Spark Streaming reads
> messages from RabbitMQ via the RabbitMqReceiver
> <https://github.com/Stratio/rabbitmq-receiver> and prints them to the
> console. But somehow I am not able to print the string read from RabbitMQ
> to the console. The Spark Streaming code only prints the message below:
>
> Value Received BlockRDD[1] at ReceiverInputDStream at RabbitMQInputDStream.scala:33
> Value Received BlockRDD[2] at ReceiverInputDStream at RabbitMQInputDStream.scala:33
>
>
> The message is sent to RabbitMQ via the simple code below:
>
> package helloWorld;
>
> import com.rabbitmq.client.Channel;
> import com.rabbitmq.client.Connection;
> import com.rabbitmq.client.ConnectionFactory;
>
> public class Send {
>
>   private final static String QUEUE_NAME = "hello1";
>
>   public static void main(String[] argv) throws Exception {
>     ConnectionFactory factory = new ConnectionFactory();
>     factory.setHost("localhost");
>     Connection connection = factory.newConnection();
>     Channel channel = connection.createChannel();
>
>     channel.queueDeclare(QUEUE_NAME, false, false, false, null);
>     String message = "Hello World! is a code. Hi Hello World!";
>     channel.basicPublish("", QUEUE_NAME, null, message.getBytes("UTF-8"));
>     System.out.println(" [x] Sent '" + message + "'");
>
>     channel.close();
>     connection.close();
>   }
> }
>
>
> I am trying to read the messages via Spark Streaming as shown below:
>
> package rabbitmq.example;
>
> import java.util.*;
>
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.function.Function;
> import org.apache.spark.streaming.Durations;
> import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
> import org.apache.spark.streaming.api.java.JavaStreamingContext;
>
> import com.stratio.receiver.RabbitMQUtils;
>
> public class RabbitMqEx {
>
>   public static void main(String[] args) {
>     System.out.println("CreatingSpark   Configuration");
>     SparkConf conf = new SparkConf();
>     conf.setAppName("RabbitMq Receiver Example");
>     conf.setMaster("local[2]");
>
>     System.out.println("Retreiving  Streaming   Context fromSpark   Conf");
>     JavaStreamingContext streamCtx =
>         new JavaStreamingContext(conf, Durations.seconds(2));
>
>     Map<String, String> rabbitMqConParams = new HashMap<String, String>();
>     rabbitMqConParams.put("host", "localhost");
>     rabbitMqConParams.put("queueName", "hello1");
>     System.out.println("Trying to connect to RabbitMq");
>
>     JavaReceiverInputDStream receiverStream =
>         RabbitMQUtils.createJavaStream(streamCtx, rabbitMqConParams);
>     receiverStream.foreachRDD(new Function<JavaRDD, Void>() {
>       @Override
>       public Void call(JavaRDD arg0) throws Exception {
>         System.out.println("Value Received " + arg0.toString());
>         return null;
>       }
>     });
>
>     streamCtx.start();
>     streamCtx.awaitTermination();
>   }
> }
>
> The console output only has messages like the following:
>
> CreatingSpark   Configuration
> Retreiving  Streaming   Context fromSpark   Conf
> Trying to connect to RabbitMq
> Value Received BlockRDD[1] at ReceiverInputDStream at RabbitMQInputDStream.scala:33
> Value Received BlockRDD[2] at ReceiverInputDStream at RabbitMQInputDStream.scala:33
>
>
> In the logs I see the following:
>
> 15/11/18 13:20:45 INFO SparkContext: Running Spark version 1.5.2
> 15/11/18 13:20:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 15/11/18 13:20:45 WARN Utils: Your hostname, jabong1143 resolves to a loopback address: 127.0.1.1; using 192.168.1.3 instead (on interface wlan0)
> 15/11/18 13:20:45 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
> 15/11/18 13:20:45 INFO Sec