Re: flume spark streaming receiver host random

2014-09-26 Thread centerqi hu
The receiver is not running on the machine I expect. 2014-09-26 14:09 GMT+08:00 Sean Owen so...@cloudera.com: I think you may be missing a key word here. Are you saying that the machine has multiple interfaces and it is not using the one you expect, or the receiver is not running on the
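For context, the usual way around a randomly placed push-based receiver is the pull-based Flume stream added in Spark 1.1, where Spark connects out to a Flume agent running the SparkSink at a fixed address, so it no longer matters which executor hosts the receiver. A minimal sketch; the hostname, port, and batch interval are placeholder assumptions, not values from the thread:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePollingExample {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("FlumePollingExample"), Seconds(10))
    // Pull-based stream: Spark connects to a Flume agent running the
    // SparkSink at a known address (placeholder host/port here), so the
    // receiver's own placement on the cluster stops mattering.
    val events = FlumeUtils.createPollingStream(ssc, "flume-agent.example.com", 41414)
    events.count().map(c => s"Received $c flume events.").print()
    ssc.start()
    ssc.awaitTermination()
  }
}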

flume spark streaming receiver host random

2014-09-25 Thread centerqi hu
Hi all, my code is as follows: /usr/local/webserver/sparkhive/bin/spark-submit --class org.apache.spark.examples.streaming.FlumeEventCount --master yarn --deploy-mode cluster --queue online --num-executors 5 --driver-memory 6g --executor-memory 20g --executor-cores 5
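For reference, a paraphrased sketch of what the bundled FlumeEventCount example does: it takes a host and port argument, starts a push-based Flume receiver bound to that address, and prints a per-batch event count. This follows the example's documented behavior, not its verbatim source:

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeEventCountSketch {
  def main(args: Array[String]): Unit = {
    val Array(host, port) = args   // the example expects <host> <port> after the jar
    val ssc = new StreamingContext(new SparkConf().setAppName("FlumeEventCount"), Seconds(2))
    // Push-based receiver: Flume must be configured to send to host:port,
    // and the receiver must be scheduled on the machine that `host` names,
    // which is exactly the placement problem this thread is about.
    val stream = FlumeUtils.createStream(ssc, host, port.toInt, StorageLevel.MEMORY_ONLY_SER_2)
    stream.count().map(c => s"Received $c flume events.").print()
    ssc.start()
    ssc.awaitTermination()
  }
}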

Re: Unsupported language features in query

2014-09-02 Thread centerqi hu
hql("CREATE TABLE tmp_adclick_gm_all ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' as SELECT SUM(uv) as uv, round(SUM
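The error in the subject line typically means Spark SQL's HiveQL parser rejected part of the statement; in the 1.0.x line, CREATE TABLE ... ROW FORMAT ... AS SELECT was a known unsupported construct. A hypothetical workaround sketch that splits the statement in two; the column types and source table name are invented here, since the original query is truncated above:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object CtasWorkaround {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CtasWorkaround"))
    val hiveContext = new HiveContext(sc)

    // Step 1: a plain CREATE TABLE carrying the ROW FORMAT clause.
    hiveContext.hql("CREATE TABLE IF NOT EXISTS tmp_adclick_gm_all (uv BIGINT, cost DOUBLE) " +
      "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'")
    // Step 2: populate it with the SELECT the CTAS form combined in.
    hiveContext.hql("INSERT OVERWRITE TABLE tmp_adclick_gm_all " +
      "SELECT SUM(uv), round(SUM(cost), 2) FROM some_source_table")
  }
}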

save schemardd to hive

2014-09-02 Thread centerqi hu
I want to save a SchemaRDD to Hive. val usermeta = hql("SELECT userid,idlist from usermeta WHERE day='2014-08-01' limit 1000") case class SomeClass(name:String, idlist:String) val schemardd = usermeta.map(t => SomeClass(t(0).toString, t(1).toString)) How do I save this SchemaRDD to Hive? Thanks

Re: save schemardd to hive

2014-09-02 Thread centerqi hu
,idlist:String) val scmm = usermeta.map(t => SomeClass(t(0).toString, t(1).toString + id)) val good = createSchemaRDD(scmm) good.saveAsTable("meta_test") 2014-09-02 17:50 GMT+08:00 centerqi hu cente...@gmail.com: I want to save a SchemaRDD to Hive. val usermeta = hql("SELECT userid,idlist from
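Pulled together, the approach this thread lands on looks roughly like the following. The table and column names come from the thread itself; the surrounding boilerplate (conf, context, imports) is a sketch assuming the Spark 1.0.x SchemaRDD API:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

case class SomeClass(name: String, idlist: String)

object SaveSchemaRddToHive {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SaveSchemaRddToHive"))
    val hiveContext = new HiveContext(sc)
    import hiveContext._   // brings createSchemaRDD into scope

    // Map the query result into a case class, convert it to a SchemaRDD,
    // then persist it as a Hive table.
    val usermeta = hql("SELECT userid, idlist FROM usermeta WHERE day='2014-08-01' LIMIT 1000")
    val scmm = usermeta.map(t => SomeClass(t(0).toString, t(1).toString))
    val good = createSchemaRDD(scmm)
    good.saveAsTable("meta_test")
  }
}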

spark on yarn with hive

2014-08-30 Thread centerqi hu
I want to run Hive on Spark on a YARN cluster; the Hive metastore is stored in MySQL. I compiled Spark with: sh make-distribution.sh --hadoop 2.4.1 --with-yarn --skip-java-test --tgz --with-hive My HQL code: import org.apache.spark.{SparkConf, SparkContext} import org.apache.spark.sql._ import
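A minimal sketch of the driver side of such a setup, assuming a --with-hive build and a hive-site.xml pointing at the MySQL metastore on the driver's classpath; the query is just a placeholder smoke test, not from the thread:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HiveOnYarnExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HiveOnYarnExample"))
    // HiveContext picks up hive-site.xml from the classpath, which is how
    // it finds the MySQL-backed metastore.
    val hiveContext = new HiveContext(sc)
    hiveContext.hql("SHOW TABLES").collect().foreach(println)
  }
}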

hive on spark yarn

2014-08-27 Thread centerqi hu
Hi all, when I run a simple SQL query I encounter the following error. Hive 0.12 (metastore in MySQL), Hadoop 2.4.1, Spark 1.0.2 built with Hive. My HQL code: import org.apache.spark.{SparkConf, SparkContext} import org.apache.spark.sql._ import org.apache.spark.sql.hive.LocalHiveContext object
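One common pitfall worth flagging here, hedged since the actual error is truncated above: LocalHiveContext spins up an embedded metastore and warehouse in the working directory, so it never talks to an external MySQL metastore at all. A sketch of the usual fix, with a placeholder table name:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object HqlExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("HqlExample"))
    // HiveContext instead of LocalHiveContext: the latter creates an
    // embedded metastore in the working directory and ignores the
    // external MySQL metastore entirely.
    val hiveContext = new HiveContext(sc)
    hiveContext.hql("SELECT count(*) FROM some_table").collect().foreach(println)
  }
}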

Re: Spark memory settings on yarn

2014-08-21 Thread centerqi hu
of ApplicationMaster, ExecutorRunnable or CoarseGrainedExecutorBackend, not org.apache.hadoop.mapred.YarnChild. On Wed, Aug 20, 2014 at 6:56 PM, centerqi hu cente...@gmail.com wrote: Spark's memory settings are confusing to me. My code is as follows: spark-1.0.2-bin-2.4.1/bin/spark-submit

Spark memory settings on yarn

2014-08-20 Thread centerqi hu
Spark's memory settings are confusing to me. My code is as follows: spark-1.0.2-bin-2.4.1/bin/spark-submit --class SimpleApp \ --master yarn \ --deploy-mode cluster \ --queue sls_queue_1 \ --num-executors 3 \ --driver-memory 6g \ --executor-memory 10g \ --executor-cores 5 \
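For reference, a sketch of how those flags map onto Spark configuration properties. The property names follow later 1.x documentation and are an assumption for 1.0.2, where the spark-submit flags are the reliable route; note also that in yarn-cluster mode driver memory cannot be set from SparkConf, because the driver JVM is already running by the time the conf is read:

import org.apache.spark.{SparkConf, SparkContext}

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // Programmatic equivalents of the submit flags above (assumed property
    // names; on 1.0.2 prefer the flags themselves).
    val conf = new SparkConf()
      .setAppName("SimpleApp")
      .set("spark.executor.memory", "10g")   // --executor-memory 10g
      .set("spark.executor.cores", "5")      // --executor-cores 5
      .set("spark.executor.instances", "3")  // --num-executors 3
    val sc = new SparkContext(conf)
    // ... application logic ...
    sc.stop()
  }
}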

spark on yarn cluster can't launch

2014-08-15 Thread centerqi hu
The following command does not run: ../bin/spark-submit --class org.apache.spark.examples.SparkPi \ --master yarn \ --deploy-mode cluster \ --verbose \ --num-executors 3 \ --driver-memory 4g \ --executor-memory 2g \ --executor-cores 1 \ ../lib/spark-examples*.jar \ 100 Exception in thread
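For orientation, a paraphrased sketch of the SparkPi example being submitted (the trailing 100 is its slice-count argument). The logic is trivial Monte Carlo estimation, so a failure at this point is almost certainly in the YARN launch path rather than in the job itself:

import scala.math.random
import org.apache.spark.{SparkConf, SparkContext}

object SparkPi {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkPi"))
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices
    // Sample random points in the unit square and count how many fall
    // inside the unit circle; the fraction approximates pi / 4.
    val count = sc.parallelize(1 to n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)
    println(s"Pi is roughly ${4.0 * count / n}")
    sc.stop()
  }
}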