For our use case we need to integrate Drools, but there are no guides or tutorials.
How can we implement Drools in Spark? If someone has an idea of how to
set this up, please guide me.
Thanks in advance.
Regards,
Vishnu
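There is no official Drools-on-Spark integration that I know of, but one common pattern for using any rule engine inside Spark is to create the engine once per partition (not per record) with mapPartitions, since the engine is expensive to build and not serializable. The sketch below assumes a Drools KieContainer built from the classpath (a kmodule.xml shipped with the application jar) and a session named "ksession-rules"; the Applicant fact class, the session name, and the rules themselves are all assumptions for illustration, not something from the original post.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.kie.api.KieServices

// Hypothetical fact type the rules would match on.
case class Applicant(name: String, age: Int)

object DroolsOnSpark {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("drools-on-spark"))
    val facts = sc.parallelize(Seq(Applicant("alice", 20), Applicant("bob", 15)))

    // One KieSession per partition: built on the executor, disposed when done.
    val evaluated = facts.mapPartitions { iter =>
      val kieSession = KieServices.Factory.get()
        .getKieClasspathContainer        // expects kmodule.xml on the executor classpath
        .newKieSession("ksession-rules") // assumed session name
      // Materialize before disposing the session, since iter is lazy.
      val out = iter.map { fact =>
        kieSession.insert(fact)
        kieSession.fireAllRules()        // rules may mutate the fact in place
        fact
      }.toList
      kieSession.dispose()
      out.iterator
    }

    evaluated.collect().foreach(println)
    sc.stop()
  }
}
```

The Drools jars (kie-api, drools-core, drools-compiler) would need to be on both the driver and executor classpaths, e.g. via `--jars` on spark-submit.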
Examine the output (replace $YARN_APP_ID below with the "application
identifier" printed by the previous command). Note: YARN_APP_LOGS_DIR is
usually /tmp/logs or $HADOOP_HOME/logs/userlogs, depending on the Hadoop
version.
$ cat $YARN_APP_LOGS_DIR/$YARN_APP_ID/container*_01/stdout
When I execute the following in yarn-client mode it works fine and gives
the result properly, but when I try to run in yarn-cluster mode I get an
error:
spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client
/home/rug885/spark/examples/lib/spark-examples_2.10-1.0.0-cdh5
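For reference, the two modes differ only in the --master flag, but they differ in where the driver runs: in client mode the driver (and SparkPi's println of the result) runs in your local terminal, while in cluster mode the driver runs inside the YARN application master, so the result lands in the container logs rather than on your console. A sketch of both invocations follows; the jar path is a placeholder, not the (truncated) one from the post.

```shell
# Client mode: driver runs locally, result is printed to this terminal.
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  /path/to/spark-examples.jar 10

# Cluster mode: driver runs in the YARN application master; look for the
# result in the container stdout logs (see the YARN log note above).
spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  /path/to/spark-examples.jar 10
```

When cluster mode fails but client mode works, the YARN container logs for the failed application are usually the first place to look for the actual exception.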
I am a newbie to Scala and Spark. I am joining two datasets: the first
comes from a stream and the second is in HDFS.
I am using Scala in Spark. After joining the two datasets, I need to apply
a filter on the joined dataset, but here I am facing an issue. Please assist
me in resolving it.
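The usual Spark Streaming pattern for this is to load the HDFS dataset once as a static pair RDD, then use DStream.transform to join each streaming batch against it and filter the joined result. The sketch below assumes "key,value" text lines on both sides, a socket source on localhost:9999, an HDFS path, and a sample filter predicate; all of these are assumptions standing in for the poster's actual data.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._   // pair-RDD implicits (needed on Spark 1.x)
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamJoinFilter {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("stream-join-filter")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Static reference data from HDFS, keyed for the join
    // (assumes "key,value" lines; path is a placeholder).
    val reference = ssc.sparkContext
      .textFile("hdfs:///data/reference.txt")
      .map { line => val f = line.split(","); (f(0), f(1)) }

    // Streaming side, keyed the same way (source is an assumption).
    val stream = ssc.socketTextStream("localhost", 9999)
      .map { line => val f = line.split(","); (f(0), f(1)) }

    // transform() exposes RDD operations such as join on each batch;
    // the filter then runs on (key, (streamValue, refValue)) tuples.
    val joinedAndFiltered = stream.transform { rdd =>
      rdd.join(reference)
        .filter { case (_, (streamValue, refValue)) => streamValue != refValue }
    }

    joinedAndFiltered.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```

The key point is that filter is applied inside (or after) transform on the joined RDD, whose elements are (key, (leftValue, rightValue)) tuples, so the pattern match in the filter must destructure that shape.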
I am using