Thanks, Nick.
This worked for me.
val evaluator = new BinaryClassificationEvaluator().
setLabelCol("label").
setRawPredictionCol("ModelProbability").
setMetricName("areaUnderROC")
val auROC = evaluator.evaluate(testResults)
On
Typically you pass in the result of a model transform to the evaluator.
So:
val model = estimator.fit(data)
val auc = evaluator.evaluate(model.transform(testData))
Check Scala API docs for some details:
Can you please suggest how I can use BinaryClassificationEvaluator? I tried:
scala> import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
scala> val evaluator = new BinaryClassificationEvaluator()
evaluator:
DataFrame.rdd returns an RDD[Row]. You'll need to use map to extract the
doubles from the test score and label DF.
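A minimal sketch of that extraction, assuming the DataFrame has a double-valued score column named "ModelProbability" and a "label" column (both names taken from the thread; adjust to your schema), feeding the RDD-based BinaryClassificationMetrics:

```scala
// Sketch only: pull (score, label) pairs out of the Row objects so the
// RDD-based metrics class can consume them. Requires an active SparkSession.
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

val scoreAndLabels = testResults.rdd.map { row =>
  (row.getAs[Double]("ModelProbability"), row.getAs[Double]("label"))
}
val metrics = new BinaryClassificationMetrics(scoreAndLabels)
println(metrics.areaUnderROC())
```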
But you may prefer to just use spark.ml evaluators, which work with
DataFrames. Try BinaryClassificationEvaluator.
On Mon, 14 Nov 2016 at 19:30, Bhaarat Sharma
To: Mekal Zheng <mekal.zh...@gmail.com>, spark users <user@spark.apache.org>
Subject: Re: scala.MatchError on stand-alone cluster mode
Hi Mekal,
It may be a Scala version mismatch error; kindly check whether you are
running both (
The error stack is throwing from your code:
Caused by: scala.MatchError: [Ljava.lang.String;@68d279ec (of class
[Ljava.lang.String;)
at com.jd.deeplog.LogAggregator$.main(LogAggregator.scala:29)
at com.jd.deeplog.LogAggregator.main(LogAggregator.scala)
I think you should debug
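For reference, the `[Ljava.lang.String;` in that message is the JVM name for Array[String]: the error means an Array[String] reached a match expression with no case covering it. A minimal standalone reproduction (names hypothetical, not from LogAggregator itself):

```scala
// Sketch of how this kind of scala.MatchError arises and how to fix it:
// an Array[String] (e.g. the result of split) hits a match with no
// case for arrays, producing
//   scala.MatchError: [Ljava.lang.String;@... (of class [Ljava.lang.String;)
def describe(v: Any): String = v match {
  case s: String => s"string: $s"
  // Without this case, describe("a b c".split(" ")) throws scala.MatchError.
  case a: Array[String] => s"array of ${a.length} fields"
}
```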
For more details on my question, see:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-generate-Java-bean-class-for-avro-files-using-spark-avro-project-tp22413.html
Thanks,
Yamini
On Tue, Apr 7, 2015 at 2:23 PM, Yamini Maddirala yamini.m...@gmail.com
wrote:
Hi Michael,
Yes, I did try
Have you looked at spark-avro?
https://github.com/databricks/spark-avro
On Tue, Apr 7, 2015 at 3:57 AM, Yamini yamini.m...@gmail.com wrote:
Using Spark (1.2) Streaming to read Avro-schema-based topics flowing in
Kafka, and then using the Spark SQL context to register the data as a temp
table. Avro Maven
Hi Michael,
Yes, I did try the spark-avro 0.2.0 Databricks project. I am using CDH5.3, which
is based on Spark 1.2. Hence I'm bound to use spark-avro 0.2.0 instead of
the latest.
I'm not sure how spark-avro project can help me in this scenario.
1. I have JavaDStream of type avro generic record
All values in Hive are always nullable, though you should still not be
seeing this error.
It should be addressed by this patch:
https://github.com/apache/spark/pull/3150
On Fri, Dec 5, 2014 at 2:36 AM, Hao Ren inv...@gmail.com wrote:
Hi,
I am using SparkSQL on 1.1.0 branch.
The following
)
case class Instrument(issue: Issue = null)
-Naveen
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Wednesday, November 12, 2014 12:09 AM
To: Xiangrui Meng
Cc: Naveen Kumar Pokala; user@spark.apache.org
Subject: Re: scala.MatchError
Xiangrui is correct that it must be a Java bean
I think you need a Java bean class instead of a normal class. See
example here: http://spark.apache.org/docs/1.1.0/sql-programming-guide.html
(switch to the java tab). -Xiangrui
On Tue, Nov 11, 2014 at 7:18 AM, Naveen Kumar Pokala
npok...@spcapitaliq.com wrote:
Hi,
This is my Instrument java
Xiangrui is correct that it must be a Java bean; also, nested classes are
not yet supported in Java.
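As a hedged sketch of what that means in practice: a flat bean of the kind Spark 1.1's JavaSQLContext can reflect over needs a public no-arg constructor, getters/setters for every field, and no nested custom classes (field names below are hypothetical, not from Naveen's actual Instrument class):

```java
import java.io.Serializable;

// Sketch of a flat Java bean usable with JavaSQLContext.applySchema:
// the nested Issue field from the original class is flattened into
// simple bean properties instead.
public class Instrument implements Serializable {
    private String issueId;  // flattened field replacing a nested Issue class
    private double price;

    public Instrument() {}   // no-arg constructor required for bean inference

    public String getIssueId() { return issueId; }
    public void setIssueId(String issueId) { this.issueId = issueId; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}
```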
On Tue, Nov 11, 2014 at 10:11 AM, Xiangrui Meng men...@gmail.com wrote:
I think you need a Java bean class instead of a normal class. See
example here:
Can you provide the exception stack?
Thanks,
Daoyuan
From: Ge, Yao (Y.) [mailto:y...@ford.com]
Sent: Sunday, October 19, 2014 10:17 PM
To: user@spark.apache.org
Subject: scala.MatchError: class java.sql.Timestamp
I am working with Spark 1.1.0 and I believe Timestamp is a supported data type
From: Wang, Daoyuan [mailto:daoyuan.w...@intel.com]
Sent: Sunday, October 19, 2014 10:31 AM
To: Ge, Yao (Y.); user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Timestamp
Can you provide the exception stack?
Thanks,
Daoyuan
From: Ge, Yao (Y.) [mailto:y...@ford.com]
This seems to be a bug in JavaSQLContext.getSchema(), which doesn't enumerate all of
the data types supported by Catalyst.
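To illustrate the shape of the problem (this is a simplified sketch, not the actual getSchema source): a class-to-Catalyst-type match that omits a supported type throws exactly this MatchError when a bean exposes that type.

```scala
// Hypothetical reduction of the bug: a type-mapping match that is
// missing a case for java.sql.Timestamp would fail with
//   scala.MatchError: class java.sql.Timestamp
// the moment a bean getter returns one.
def toCatalystType(clazz: Class[_]): String = clazz match {
  case c if c == classOf[String]             => "StringType"
  case c if c == classOf[java.lang.Integer]  => "IntegerType"
  // The fix (SPARK-4003) amounts to adding the missing cases, e.g.:
  case c if c == classOf[java.sql.Timestamp] => "TimestampType"
}
```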
From: Ge, Yao (Y.) [mailto:y...@ford.com]
Sent: Sunday, October 19, 2014 11:44 PM
To: Wang, Daoyuan; user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Timestamp
I have created an issue for this
https://issues.apache.org/jira/browse/SPARK-4003
From: Cheng, Hao
Sent: Monday, October 20, 2014 9:20 AM
To: Ge, Yao (Y.); Wang, Daoyuan; user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Timestamp
This seems to be a bug in JavaSQLContext.getSchema