Hello,
When I try to compile the Spark project, I get the following error:
milad@pc:~/workspace/source/spark$ build/mvn -DskipTests clean package
Using `mvn` from path: /home/milad/.linuxbrew/bin/mvn
Unrecognized VM option 'MaxPermSize=512M'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
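For what it's worth, the PermGen sizing options were removed from the JVM in Java 8, and the error quotes the option without its `-XX:` prefix, so the JVM rejects it outright. Assuming the flag reaches the build through `MAVEN_OPTS` (an assumption; it may also come from the `build/mvn` wrapper itself), one workaround is to override the variable with options a modern JVM still accepts before re-running the build:

```shell
# Replace any inherited JVM options with ones that Java 8+ still accepts.
# The heap and code-cache sizes here are illustrative, not Spark's defaults.
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
echo "MAVEN_OPTS=${MAVEN_OPTS}"
```

After that, `build/mvn -DskipTests clean package` should at least get past JVM startup.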
> On Tue, Jan 19, 2016 at 3:42 AM, Milad khajavi wrote:
Hi Spark users,
When I want to map the result of a count on a groupBy, I have to convert the
result to a DataFrame, change the column names, and then map the result to a
new case class. Why doesn't the Spark Dataset API offer this functionality directly?
case class LogRow(id: String, location: String, time: Long)
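For context, the workaround being described usually looks like the sketch below (Spark 1.6-era Dataset API; `KeyValue`, the column names, and `ds` are hypothetical stand-ins for whatever the truncated example used):

```scala
import sqlContext.implicits._  // encoders for the case classes

case class LogRow(id: String, location: String, time: Long)
case class KeyValue(key: String, value: Long)  // hypothetical result class

// ds: Dataset[LogRow]
val counts = ds.groupBy(_.location).count()  // Dataset[(String, Long)]
  .toDF()                                    // detour through a DataFrame
  .toDF("key", "value")                      // rename the columns
  .as[KeyValue]                              // back to a typed Dataset
```

A direct `.map { case (key, value) => KeyValue(key, value) }` on the counts is the other route, assuming an encoder for `KeyValue` is in scope.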
Here are links to the same issue:
[1]
http://stackoverflow.com/questions/28186607/java-lang-classcastexception-using-lambda-expressions-in-spark-job-on-remote-ser
[2]
http://mail-archives.apache.org/mod_mbox/spark-user/201501.mbox/%3CCAJUHuJoE7nP6MMOJJKTL6kZtamQ=qhym1aozmezbnetla1y...@mail.gmail.com%3E#ar
Hi all,
I can run a Spark job programmatically in Java SE with the following code without any error:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
public class TestSpark {
    public static void main(String[] args) {
        String sourceP