Meanwhile, I prefer to use Maven to compile the jar file rather than sbt,
although sbt is indeed another option.
Best regards,
Jack
From: Fengdong Yu [mailto:fengdo...@everstring.com]
Sent: Wednesday, 18 November 2015 7:30 PM
To: Jack Yang
Cc: Ted Yu; user@spark.apache.org
SQLContext implements only a subset of SQL functionality and does not include
window functions.
In HiveContext they work fine, though.
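As a hedged sketch of the point above (the table and column names are invented, not from this thread), the same windowed query that a plain SQLContext rejects in Spark 1.4 parses fine through a HiveContext:

```scala
// Sketch only: assumes Spark 1.4 and an existing Hive table "people".
// A plain SQLContext would reject the OVER clause; HiveContext accepts it.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object WindowDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("window-demo"))
    val hc = new HiveContext(sc)
    hc.sql(
      """select id, name,
        |       rank() over (partition by name order by id) as r
        |from people""".stripMargin).show()
    sc.stop()
  }
}
```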
From: Stephen Boesch [mailto:java...@gmail.com]
Sent: Thursday, 19 November 2015 3:01 PM
To: Michael Armbrust
Cc: Jack Yang; user
Subject: Re: Do windowing functions
Which version of spark are you using?
From: Stephen Boesch [mailto:java...@gmail.com]
Sent: Thursday, 19 November 2015 2:12 PM
To: user
Subject: Do windowing functions require hive support?
The following works against a Hive table from Spark SQL:
hc.sql("select id,r from (select id, name,
is they cannot find the
Class, but with “compiled” the error is IncompatibleClassChangeError.
OK, so can someone tell me which versions of breeze and breeze-math are used in
Spark 1.4?
From: Zhiliang Zhu [mailto:zchl.j...@yahoo.com]
Sent: Thursday, 19 November 2015 5:10 PM
To: Ted Yu
Cc: Jack Yang; Fengdong
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 10 more
15/11/18 17:15:15 INFO util.Utils: Shutdown hook called
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, 18 November 2015 4:01 PM
To: Jack Yang
Cc: user@spark.apache.org
Subject:
Hi all,
I am saving some Hive query results into a local directory:
val hdfsFilePath = "hdfs://master:ip/tempFile";
val localFilePath = "file:///home/hduser/tempFile";
hiveContext.sql(s"""my hql codes here""")
res.printSchema() // working
res.show() // working
res.map{ x => tranRow2Str(x)
Yes. Mine is 1.4.0.
Is this problem then related to the version?
I doubt it. Any comments, please?
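For what it's worth, a minimal sketch of the usual pattern (paths and query are placeholders): saveAsTextFile writes a *directory* of part files, and with a cluster master a file:// path lands on each executor's own local disk, so writing to a single local directory generally means running with a local master or coalescing first.

```scala
// Sketch, assuming a HiveContext named hiveContext already exists.
val res = hiveContext.sql("""my hql codes here""")
res.map(row => row.mkString("\t"))   // stand-in for tranRow2Str
   .coalesce(1)                      // one part file instead of many
   .saveAsTextFile("file:///home/hduser/tempFile")  // creates a directory
```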
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, 4 November 2015 11:52 AM
To: Jack Yang
Cc: user@spark.apache.org
Subject: Re: error with saveAsTextFile in local directory
Looks
September 2015 12:27 AM
To: Jack Yang
Cc: Ted Yu; Andy Huang; user@spark.apache.org
Subject: Re: No space left on device when running graphx job
Would you mind sharing what your solution was? It would help those on the forum
who might run into the same problem, even if it's a silly 'gotcha'.
Hi folk,
I have an issue with GraphX. (Spark 1.4.0 + 4 machines + 4 GB memory + 4 CPU cores.)
Basically, I load data using the GraphLoader.edgeListFile method and then count the
number of nodes using the graph.vertices.count() method.
The problem is:
Lost task 11972.0 in stage 6.0 (TID 54585, 192.168.70.129):
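In case it helps later readers: "No space left on device" during a shuffle usually means the directories in spark.local.dir have filled up. One common fix (the path below is invented) is pointing them at a larger volume in spark-defaults.conf:

```
# spark-defaults.conf sketch: move shuffle/spill files to a bigger disk
spark.local.dir    /mnt/bigdisk/spark-tmp
```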
Hi all,
I resolved the problems.
Thanks folk.
Jack
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Friday, 25 September 2015 9:57 AM
To: Ted Yu; Andy Huang
Cc: user@spark.apache.org
Subject: RE: No space left on device when running graphx job
Also, please see the screenshot below from the Spark web UI.
Hi all,
I have questions with regarding to the log file directory.
Say I run spark-submit --master local[4]; where is the log file?
And how about if I run it standalone with spark-submit --master
spark://mymaster:7077?
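In case it helps, these are the usual default locations (hedged: not verified against this particular setup; `<app-id>` and `<executor-id>` are placeholders):

```shell
# local[4]: the driver logs go to the console that launched spark-submit;
# redirect them to a file yourself if you want one:
spark-submit --master local[4] myjar.jar > app.log 2>&1

# standalone (spark://mymaster:7077): per-application executor logs live
# under each worker's work directory; daemon logs under $SPARK_HOME/logs
ls $SPARK_HOME/work/<app-id>/<executor-id>/   # stdout, stderr
ls $SPARK_HOME/logs/                          # master and worker daemons
```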
Best regards,
Jack
sqlContext.sql("insert into table newStu select * from otherStu")
that works.
Is there any document addressing that?
Best regards,
Jack
From: Terry Hole [mailto:hujie.ea...@gmail.com]
Sent: Tuesday, 21 July 2015 4:17 PM
To: Jack Yang; user@spark.apache.org
Subject: Re: standalone to connect mysql
at 9:21 pm, Jack Yang <j...@uow.edu.au> wrote:
No, I did not use HiveContext at this stage.
I am talking about the embedded SQL syntax for pure Spark SQL.
Thanks, mate.
On 21 Jul 2015, at 6:13 pm, Terry Hole <hujie.ea...@gmail.com> wrote:
Jack,
You can
July 2015 4:17 PM
To: Jack Yang; user@spark.apache.org
Subject: Re: standalone to connect mysql
Maybe you can try: spark-submit --class sparkwithscala.SqlApp --jars
/home/lib/mysql-connector-java-5.1.34.jar --master spark://hadoop1:7077
/home/myjar.jar
Thanks!
-Terry
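The difference between the two commands, as I understand it: --driver-class-path only puts the connector jar on the driver's classpath, while --jars also ships it to every executor, which is what a spark:// cluster master needs.

```shell
# driver-only classpath: fine for --master local[4]
spark-submit --class sparkwithscala.SqlApp \
  --driver-class-path /home/lib/mysql-connector-java-5.1.34.jar \
  --master local[4] /home/myjar.jar

# shipped to executors as well: needed for a cluster master
spark-submit --class sparkwithscala.SqlApp \
  --jars /home/lib/mysql-connector-java-5.1.34.jar \
  --master spark://hadoop1:7077 /home/myjar.jar
```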
Hi
Hi there,
I would like to use Spark to access the data in MySQL. So first I tried to
run the program using:
spark-submit --class sparkwithscala.SqlApp --driver-class-path
/home/lib/mysql-connector-java-5.1.34.jar --master local[4] /home/myjar.jar
that returns me the correct results. Then I
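For reference, a minimal sketch of reading a MySQL table through the JDBC data source in Spark 1.4 (connection details, credentials, and table name are invented):

```scala
// Sketch, assuming a SQLContext named sqlContext and the MySQL
// connector jar on the classpath via the spark-submit flags.
val props = new java.util.Properties()
props.setProperty("user", "spark")        // invented credentials
props.setProperty("password", "secret")
val df = sqlContext.read.jdbc(
  "jdbc:mysql://hadoop1:3306/mydb", "students", props)
df.show()
```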
Hi there,
I got an error when running one simple graphX program.
My setting is: Spark 1.4.0, Hadoop YARN 2.5, Scala 2.10, with four virtual
machines.
If I construct a small graph (6 nodes, 4 edges) and run:
println("triangleCount: %s".format(
  hdfs_graph.triangleCount().vertices.count()))
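One frequent cause of triangleCount failures in Spark 1.4 (a hedged guess, since the actual error message is cut off here): TriangleCount requires edges in canonical orientation (srcId < dstId) and a graph that has been partitioned. A sketch, with an invented HDFS path:

```scala
import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}

// canonicalOrientation = true orients every edge so that srcId < dstId,
// which triangleCount requires; partitionBy is also needed beforehand.
val hdfs_graph = GraphLoader
  .edgeListFile(sc, "hdfs://master:9000/edges.txt", canonicalOrientation = true)
  .partitionBy(PartitionStrategy.RandomVertexCut)
println("triangleCount: %s".format(
  hdfs_graph.triangleCount().vertices.count()))
```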
So is this a bug still unsolved (for Java)?
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Friday, 18 July 2014 4:52 PM
To: user@spark.apache.org
Subject: error from DecisionTree training:
Hi All,
I got an error while using DecisionTreeModel (my program is written in Java,
spark 1.0.1, scala
is working on it.
-Xiangrui
On Mon, Jul 21, 2014 at 4:20 PM, Jack Yang j...@uow.edu.au wrote:
So is this a bug still unsolved (for Java)?
From: Jack Yang [mailto:j...@uow.edu.au]
Sent: Friday, 18 July 2014 4:52 PM
To: user@spark.apache.org
Subject: error from DecisionTree training:
Hi All
Hi All,
I got an error while using DecisionTreeModel (my program is written in Java,
spark 1.0.1, scala 2.10.1).
I read in a local file, loaded it as an RDD, and then passed it to DecisionTree
for training. See below for details:
JavaRDD<LabeledPoint> points = lines.map(new ParsePoint()).cache();