Yes, https://issues.apache.org/jira/browse/SPARK-2576 is used to track it.
On Wed, Jul 23, 2014 at 9:11 AM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:
Do we have a JIRA issue to track this? I think I've run into a similar
issue.
On Wed, Jul 23, 2014 at 1:12 AM, Yin Huai wrote:
It is caused by a bug in the Spark REPL. I still do not know which part of the
REPL code causes it... I think people working on the REPL may have a better idea.
Regarding how I found it: based on the exception, it seems we pulled in some
irrelevant stuff, and that import was pretty suspicious.
Thanks,
Yin
Hi, Yin Huai
I tested again with your snippet code.
It works well in Spark 1.0.1.
Here is my code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
case class Record(data_date: String, mobile: String, create_time: String)
val mobile = Record("2014-07-20","1234567","2014-07-19
Hi Victor,
Instead of importing sqlContext.createSchemaRDD, can you explicitly call
sqlContext.createSchemaRDD(rdd) to create a SchemaRDD?
For example,
You have a case class Record.
case class Record(data_date: String, mobile: String, create_time: String)
Then, you create an RDD[Record] and let
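Putting the suggestion together, a complete version of the workaround might look like the sketch below. It targets the Spark 1.0.x API; the HDFS path and the comma-separated file layout are assumptions for illustration, not from the thread.

```scala
import org.apache.spark.sql.SQLContext

// Assumes a running spark-shell, so `sc` (the SparkContext) already exists.
val sqlContext = new SQLContext(sc)

case class Record(data_date: String, mobile: String, create_time: String)

// Build an RDD[Record] from a text file (hypothetical path and layout).
val rdd = sc.textFile("hdfs:///path/to/mobile.txt")
  .map(_.split(","))
  .map(r => Record(r(0), r(1), r(2)))

// Workaround: call createSchemaRDD explicitly instead of writing
// `import sqlContext.createSchemaRDD`, which triggers the REPL bug.
val mobile = sqlContext.createSchemaRDD(rdd)
mobile.registerAsTable("mobile")
val count = sqlContext.sql("select count(1) from mobile")
```

The point is to avoid the implicit conversion import in the REPL; calling the method directly keeps the generated REPL wrapper classes out of the picture.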
Hi, Kevin
I tried it on Spark 1.0.0, and it works fine.
It's a bug in Spark 1.0.1 ...
Thanks,
Victor
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p102
Hi, Victor
I got the same issue and I posted it.
In my case, it only happens when I run some Spark SQL queries on Spark 1.0.1;
on Spark 1.0.0 the same queries work properly.
Have you run the same job on Spark 1.0.0?
Sincerely,
Kevin
Hi, Michael
I only changed the default Hadoop version to 0.20.2-cdh3u5 and set
DEFAULT_HIVE=true in SparkBuild.scala, then ran sbt/sbt assembly.
I just run in local standalone mode using sbin/start-all.sh.
The Hadoop version is 0.20.2-cdh3u5.
Then I use spark-shell to execute the spark s
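For reference, the Spark 1.0.x sbt build could also be driven through environment variables instead of editing SparkBuild.scala directly. A sketch of the equivalent invocation (as documented for the 1.0 branch; verify the variable names against your checkout):

```shell
# Build against CDH3u5 Hadoop with Hive support enabled (Spark 1.0.x).
SPARK_HADOOP_VERSION=0.20.2-cdh3u5 SPARK_HIVE=true sbt/sbt assembly
```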
Can you tell us more about your environment? Specifically, are you also
running on Mesos?
On Jul 18, 2014 12:39 AM, "Victor Sheng" wrote:
Hi, Svend
Your reply is very helpful to me. I'll keep an eye on that ticket.
And also... Cheers :)
Best Regards,
Victor
Hi Victor,
I have the same issue (and no solution, unfortunately... )
I mentioned it in another post => keep an eye on that one in case some
future reply there could help you:
http://apache-spark-user-list.1001560.n3.nabble.com/ClassNotFoundException-line11-read-when-loading-an-HDFS-text-fil
When I run a query against a Hadoop file:
mobile.registerAsTable("mobile")
val count = sqlContext.sql("select count(1) from mobile")
res5: org.apache.spark.sql.SchemaRDD =
SchemaRDD[21] at RDD at SchemaRDD.scala:100
== Query Plan ==
ExistingRdd [data_date#0,mobile#1,create_time#2], MapPartitionsRDD[4]