How to deploy my Java code which invokes Spark in Tomcat?

2014-12-20 Thread Tao Lu
Hi, Guys, I have some code which runs well using the spark-submit command:

$SPARK_HOME/bin/spark-submit --class com.myorg.service.SparkService ./Search.jar

How can I deploy it to Tomcat? If I simply deploy the jar file, I get a ClassNotFound error. Thanks!
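One common workaround is to keep spark-submit as the entry point and have the webapp launch it as an external process, rather than putting the driver jar on Tomcat's classpath. A minimal sketch of building that command (the Spark home path is a hypothetical value; the class and jar names are the ones from the question):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: instead of deploying the Spark driver jar inside Tomcat
// (which leads to ClassNotFound errors for classes spark-submit would
// normally put on the classpath), the webapp launches spark-submit as
// an external process. "/opt/spark" is an assumed install location.
public class SparkSubmitLauncher {
    static List<String> buildSubmitCommand(String sparkHome, String mainClass, String appJar) {
        List<String> cmd = new ArrayList<>();
        cmd.add(sparkHome + "/bin/spark-submit");
        cmd.add("--class");
        cmd.add(mainClass);
        cmd.add(appJar);
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        List<String> cmd = buildSubmitCommand("/opt/spark",
                "com.myorg.service.SparkService", "./Search.jar");
        System.out.println(String.join(" ", cmd));
        // From a real servlet you would then start the process, e.g.:
        // new ProcessBuilder(cmd).inheritIO().start();
    }
}
```

In later Spark versions (1.4+), the org.apache.spark.launcher.SparkLauncher class offers a programmatic way to do the same thing without assembling the command line by hand.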

Console log file of CoarseGrainedExecutorBackend

2015-07-16 Thread Tao Lu
Hi, Guys, Where can I find the console log file of the CoarseGrainedExecutorBackend process? Thanks! Tao

Can an RDD be shared across the cluster by other drivers?

2015-08-26 Thread Tao Lu
Hi, Guys, Is it possible for an RDD created by driver A to be used by driver B? Thanks!

Re: Hbase Lookup

2015-09-03 Thread Tao Lu
Yes, Ayan, your approach will work. Or alternatively, use Spark and write a Scala/Java function which implements logic similar to your Pig UDF. Both approaches look similar. Personally, I would go with the Spark solution; it will be slightly faster, and easier if you already have a Spark cluster.

Re: Hbase Lookup

2015-09-03 Thread Tao Lu

Re: Small File to HDFS

2015-09-03 Thread Tao Lu
Your requirements conflict with each other:
1. You want to dump all your messages somewhere
2. You want to be able to update/delete individual messages
3. You don't want to introduce another NoSQL database (like HBase) since you already have all messages stored in MongoDB
My suggestion is: 1. Don't

Re: How to Take the whole file as a partition

2015-09-03 Thread Tao Lu
Your situation is special. It seems to me Spark may not fit well in your case. You want to process the individual files (500M~2G) as whole units, and you want good performance. You may want to write your own Scala/Java programs, distribute them along with those files across your cluster, and run them in
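The "roll your own" approach above can be sketched as treating each file as one indivisible unit of work and running the units in parallel with a plain thread pool. This is only an illustration of the idea, not the poster's actual program; processFile here just counts lines, where a real job would run the actual analysis:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

// Sketch: each file is processed as a whole by exactly one task,
// and the tasks run concurrently on a fixed-size thread pool.
public class WholeFileJobs {
    // Stand-in for the real per-file computation: count the lines.
    static long processFile(Path file) throws IOException {
        try (var lines = Files.lines(file)) {
            return lines.count();
        }
    }

    static Map<Path, Long> runAll(List<Path> files, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            Map<Path, Future<Long>> futures = new LinkedHashMap<>();
            for (Path f : files) {
                futures.put(f, pool.submit(() -> processFile(f)));
            }
            Map<Path, Long> results = new LinkedHashMap<>();
            for (var e : futures.entrySet()) {
                results.put(e.getKey(), e.getValue().get());
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Path a = Files.createTempFile("part-a", ".txt");
        Files.write(a, List.of("one", "two", "three"));
        Path b = Files.createTempFile("part-b", ".txt");
        Files.write(b, List.of("x", "y"));
        Map<Path, Long> results = runAll(List.of(a, b), 2);
        System.out.println(results.get(a) + " " + results.get(b)); // 3 2
    }
}
```

If Spark does end up fitting after all, SparkContext.wholeTextFiles reads each file as a single (path, content) record, though materializing a 2G file as one string is usually impractical.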

Re: Small File to HDFS

2015-09-02 Thread Tao Lu
You may consider storing it in one big HDFS file, and keep appending new messages to it. For instance: one message -> zip it -> append it to the HDFS file as one line. On Wed, Sep 2, 2015 at 12:43 PM, wrote: > Hi, > I already store them in MongoDB in parallel for operational
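The zip-and-append step can be sketched as: gzip each message, Base64-encode the compressed bytes so the record is a single text line, and append that line to one big file. This is an assumption-laden illustration, not the poster's code; a local file stands in for HDFS, and against real HDFS you would open the output stream with Hadoop's FileSystem.append instead (which requires append support to be enabled on the cluster):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Sketch: one message -> gzip -> Base64 -> one appended line.
// Base64 keeps the compressed bytes newline-free, so each record
// stays exactly one line in the big append-only file.
public class MessageAppender {
    static String encode(String message) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(message.getBytes(StandardCharsets.UTF_8));
        }
        return Base64.getEncoder().encodeToString(buf.toByteArray());
    }

    static String decode(String line) throws IOException {
        byte[] raw = Base64.getDecoder().decode(line);
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(raw))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    static void append(Path bigFile, String message) throws IOException {
        Files.writeString(bigFile, encode(message) + "\n",
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws Exception {
        Path bigFile = Files.createTempFile("messages", ".log");
        append(bigFile, "first message");
        append(bigFile, "second message");
        for (String line : Files.readAllLines(bigFile)) {
            System.out.println(decode(line));
        }
    }
}
```

Note that this layout is append-only: it gives cheap writes and sequential scans, but no random update or delete of individual messages, which is the limitation the rest of this thread keeps running into.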

Re: Small File to HDFS

2015-09-04 Thread Tao Lu
Basically they need NoSQL-like random-update access. On Fri, Sep 4, 2015 at 9:56 AM, Ted Yu wrote: > What about concurrent access (read/update) to the small file with the same > key? > > That can get a bit tricky. > > On Thu, Sep 3, 2015 at 2:47 PM, Jörn Franke

Unsubscribe

2016-12-08 Thread Tao Lu

unsubscribe

2017-07-27 Thread Tao Lu
unsubscribe

Unsubscribe

2017-06-21 Thread Tao Lu
Unsubscribe