Thanks, Nick.
This worked for me.
val evaluator = new BinaryClassificationEvaluator().
setLabelCol("label").
setRawPredictionCol("ModelProbability").
setMetricName("areaUnderROC")
val auROC = evaluator.evaluate(testResults)
Typically you pass in the result of a model transform to the evaluator.
So:
val model = estimator.fit(data)
val auc = evaluator.evaluate(model.transform(testData))
Check the Scala API docs for details:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.evaluation.BinaryC
Can you please suggest how I can use BinaryClassificationEvaluator? I tried:
scala> import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
scala> val evaluator = new BinaryClassificationEvaluator()
evaluator: org.ap
DataFrame.rdd returns an RDD[Row]. You'll need to use map to extract the
doubles from the test score and label DF.
But you may prefer to just use spark.ml evaluators, which work with
DataFrames. Try BinaryClassificationEvaluator.
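A minimal sketch of that extraction step. To keep it runnable without a Spark cluster, each Row is modeled here as a `Seq[Any]`; against a real DataFrame the same logic would be written as `df.rdd.map(r => (r.getDouble(0), r.getDouble(1)))`. The column layout (score first, label second) is an assumption for illustration.

```scala
// Stand-in for DataFrame.rdd: each "row" holds (rawScore, label) as Any values,
// mimicking the untyped RDD[Row] you get back from a DataFrame.
val rows: Seq[Seq[Any]] = Seq(Seq(0.9, 1.0), Seq(0.2, 0.0), Seq(0.7, 1.0))

// Map over the rows and pull the two doubles out, as the advice above describes.
val scoreAndLabel: Seq[(Double, Double)] = rows.map { r =>
  (r(0).asInstanceOf[Double], r(1).asInstanceOf[Double])
}
```

The resulting `(score, label)` pairs are the shape the RDD-based `BinaryClassificationMetrics` expects, though as noted above the DataFrame-based evaluator avoids this step entirely.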
On Mon, 14 Nov 2016 at 19:30, Bhaarat Sharma wrote:
> I am gettin
--
Mekal Zheng
Sent with Airmail
From: Rishabh Bhardwaj
Reply-To: Rishabh Bhardwaj
Date: July 15, 2016 at 17:28:43
To: Saisai Shao
Cc: Mekal Zheng, spark users
Subject: Re: scala.MatchError on stand-alone cluster mode
Hi Mekal,
It may be a Scala version mismatch error; kindly check whether you are
The error stack trace is thrown from your code:
Caused by: scala.MatchError: [Ljava.lang.String;@68d279ec (of class
[Ljava.lang.String;)
at com.jd.deeplog.LogAggregator$.main(LogAggregator.scala:29)
at com.jd.deeplog.LogAggregator.main(LogAggregator.scala)
I think you should debug the
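A self-contained sketch of how this MatchError arises, assuming (hypothetically) that line 29 of LogAggregator pattern-matches on a value whose runtime class is `[Ljava.lang.String;` (an `Array[String]`) without a case that covers arrays. The log-field layout below is invented for illustration.

```scala
// A value whose runtime class is [Ljava.lang.String; — exactly what the
// "(of class [Ljava.lang.String;)" in the stack trace refers to.
val parts: Any = "2016-07-15 INFO started".split(" ")

// Broken: no case covers Array[String], so this throws scala.MatchError.
def broken(v: Any): String = v match {
  case s: String => s
}

// Fixed: destructure the array explicitly with an Array(...) pattern,
// and keep a fallback case so the match is total.
def fixed(v: Any): String = v match {
  case Array(date, level, msg) => s"$level: $msg"
  case other                   => other.toString
}
```

If the actual code matches the result of `String.split` (or a similar array-producing call), adding an `Array(...)` case along these lines is the usual fix.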
For more details on my question:
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-generate-Java-bean-class-for-avro-files-using-spark-avro-project-tp22413.html
Thanks,
Yamini
On Tue, Apr 7, 2015 at 2:23 PM, Yamini Maddirala
wrote:
> Hi Michael,
>
> Yes, I did try spark-avro 0.2.0 datab
Hi Michael,
Yes, I did try the spark-avro 0.2.0 Databricks project. I am using CDH 5.3, which
is based on Spark 1.2. Hence I'm bound to use spark-avro 0.2.0 instead of
the latest.
I'm not sure how spark-avro project can help me in this scenario.
1. I have JavaDStream of type avro generic record
:JavaD
Have you looked at spark-avro?
https://github.com/databricks/spark-avro
On Tue, Apr 7, 2015 at 3:57 AM, Yamini wrote:
> Using spark(1.2) streaming to read avro schema based topics flowing in
> kafka
> and then using spark sql context to register data as temp table. Avro maven
> plugin(1.7.7 ver
All values in Hive are always nullable, though you should still not be
seeing this error.
It should be addressed by this patch:
https://github.com/apache/spark/pull/3150
On Fri, Dec 5, 2014 at 2:36 AM, Hao Ren wrote:
> Hi,
>
> I am using SparkSQL on 1.1.0 branch.
>
> The following code leads to
case class Instrument(issue: Issue = null)
-Naveen
From: Michael Armbrust [mailto:mich...@databricks.com]
Sent: Wednesday, November 12, 2014 12:09 AM
To: Xiangrui Meng
Cc: Naveen Kumar Pokala; user@spark.apache.org
Subject: Re: scala.MatchError
Xiangrui is correct that it must be a Java bean; also, nested classes are
not yet supported in Java.
On Tue, Nov 11, 2014 at 10:11 AM, Xiangrui Meng wrote:
> I think you need a Java bean class instead of a normal class. See
> example here:
> http://spark.apache.org/docs/1.1.0/sql-programming-guid
I think you need a Java bean class instead of a normal class. See
example here: http://spark.apache.org/docs/1.1.0/sql-programming-guide.html
(switch to the java tab). -Xiangrui
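A hedged sketch of what "Java bean class" means here, written in Scala with `@BeanProperty` so the getters/setters a JavaBean requires are generated. The field names are hypothetical stand-ins for whatever `Instrument` actually holds; per the advice above, none of them could themselves be a nested bean class.

```scala
import scala.beans.BeanProperty

// JavaBean-style class: a public no-arg constructor plus getter/setter pairs
// for every field, which is the shape Spark SQL's Java reflection path expects.
// Field names here are invented for illustration.
class Instrument extends java.io.Serializable {
  @BeanProperty var issue: String = _
  @BeanProperty var price: Double = _
}
```

The generated `getIssue`/`setIssue` and `getPrice`/`setPrice` methods are what schema inference reflects on; a plain class with constructor parameters and no accessors would not be picked up.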
On Tue, Nov 11, 2014 at 7:18 AM, Naveen Kumar Pokala
wrote:
> Hi,
>
>
>
> This is my Instrument java constructor.
>
>
>
I have created an issue for this
https://issues.apache.org/jira/browse/SPARK-4003
From: Cheng, Hao
Sent: Monday, October 20, 2014 9:20 AM
To: Ge, Yao (Y.); Wang, Daoyuan; user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Timestamp
There seem to be bugs in JavaSQLContext.getSchema(), which doesn't enumerate all of
the data types supported by Catalyst.
From: Ge, Yao (Y.) [mailto:y...@ford.com]
Sent: Sunday, October 19, 2014 11:44 PM
To: Wang, Daoyuan; user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Time
(RemoteTestRunner.java:197)
From: Wang, Daoyuan [mailto:daoyuan.w...@intel.com]
Sent: Sunday, October 19, 2014 10:31 AM
To: Ge, Yao (Y.); user@spark.apache.org
Subject: RE: scala.MatchError: class java.sql.Timestamp
Can you provide the exception stack?
Thanks,
Daoyuan
From: Ge, Yao (Y.) [mailto:y...@ford.com]
Sent: Sunday, October 19, 2014 10:17 PM
To: user@spark.apache.org
Subject: scala.MatchError: class java.sql.Timestamp
I am working with Spark 1.1.0 and I believe Timestamp is a supported data type
for