I think somewhere along the line you've not specified your label
column -- it's defaulting to "label", and that column isn't being recognized, or
at least not as a binary or nominal attribute.
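For reference, a rough sketch of wiring the label column up explicitly via StringIndexer (the column names here are made up, and this assumes a DataFrame `data` with a raw label column and an assembled "features" column):

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.DecisionTreeClassifier
import org.apache.spark.ml.feature.StringIndexer

// StringIndexer attaches the nominal-attribute metadata the tree needs,
// and produces a label column we then point the classifier at explicitly
val indexer = new StringIndexer()
  .setInputCol("myLabel")        // hypothetical raw label column
  .setOutputCol("indexedLabel")

val dt = new DecisionTreeClassifier()
  .setLabelCol("indexedLabel")   // without this it defaults to "label"
  .setFeaturesCol("features")

val model = new Pipeline().setStages(Array(indexer, dt)).fit(data)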
On Sun, Sep 6, 2015 at 5:47 AM, Terry Hole wrote:
> Hi, Experts,
>
> I followed the g
Hi, Experts,
I followed the Spark ML pipeline guide
<http://spark.apache.org/docs/latest/ml-guide.html> to test
DecisionTreeClassifier in the spark shell with Spark 1.4.1, but I always hit an
error like the following. Do you have any idea how to fix this?
The error stack:
*java.lang.IllegalArgumentExc
Hi,
Am I doing something off base when executing tests for the core module using
sbt as follows?
[spark]> core/test
...
[info] KryoSerializerAutoResetDisabledSuite:
[info] - sort-shuffle with bypassMergeSort (SPARK-7873) (53 milliseconds)
[info] - calling deserialize() after deserializeStream()
in spark streaming with kafka. I can see that
> spark streaming is checkpointing to the mentioned directory at hdfs. How
> can
> i test that it works fine and recover with no data loss ?
>
>
> Thanks
>
>
>
Hi!
I have enabled checkpointing in Spark Streaming with Kafka. I can see that
Spark Streaming is checkpointing to the configured directory on HDFS. How can
I test that it works correctly and recovers with no data loss?
Thanks
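One way to exercise recovery by hand is to build the context through StreamingContext.getOrCreate, kill the driver mid-stream, restart it, and compare the output against what was published to Kafka. A rough sketch (the checkpoint directory, batch interval and app name are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///tmp/streaming-checkpoint"   // placeholder path

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("CheckpointRecoveryTest")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDir)
  // ... create the Kafka input stream and the output operations here ...
  ssc
}

// First run: builds a fresh context. After a driver crash and restart with the
// same checkpoint dir: rebuilds the context and pending batches from HDFS.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()

Kill the driver (e.g. kill -9) while data is flowing, restart the same application, and compare the job's output against what was published to Kafka to check for loss.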
I'd suggest setting sbt to fork when running tests.
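In build.sbt terms that is roughly the following (the heap and PermGen sizes are only example values):

// build.sbt: give each test run its own JVM so PermGen used by
// HiveContext / Derby classes is released when the forked JVM exits
fork in Test := true
javaOptions in Test ++= Seq("-Xmx2g", "-XX:MaxPermSize=512m")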
On Wed, Aug 26, 2015 at 10:51 AM, Mike Trienis
wrote:
> Thanks for your response Yana,
>
> I can increase the MaxPermSize parameter and it will allow me to run the
> unit test a few more times before I run out of memory.
Thanks for your response Yana,
I can increase the MaxPermSize parameter and it will allow me to run the
unit test a few more times before I run out of memory.
However, the primary issue is that running the same unit test in the same
JVM (multiple times) results in increasing memory usage (each run of
eSize=512m
true
1
false
true
test
Hello,
I am using sbt and created a unit test where I create a `HiveContext`,
execute a query, and then return. Each time I run the unit test, the JVM's
memory usage increases until I get the error:
Internal error when running tests: java.lang.OutOfMemoryError: PermGen space
Exce
Thanks Chenghao!
At 2015-08-25 13:06:40, "Cheng, Hao" wrote:
Yes, check the source code under:
https://github.com/apache/spark/tree/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst
From: Todd [mailto:bit1...@163.com]
Sent: Tuesday, August 25, 2015 1:01
Yes, check the source code under:
https://github.com/apache/spark/tree/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst
From: Todd [mailto:bit1...@163.com]
Sent: Tuesday, August 25, 2015 1:01 PM
To: user@spark.apache.org
Subject: Test case for the spark sql catalyst
Hi, Are
Hi, are there test cases for the Spark SQL Catalyst, such as tests of the rules
for transforming an unresolved query plan?
Thanks!
where applicable
15/07/16 12:31:58 WARN hdfs.DFSClient: DFSInputStream has been closed already
Hi Mike: I am new to HiBench, so I just set up a test environment with a 1-node
spark/hadoop cluster to test, no data actually, because HiBench will
auto
eption didn't happen to other localized files, such
as country_codes
FYI
On Wed, Jul 15, 2015 at 8:53 PM, wrote:
> Hi all
>
> when I am running HiBench on my Spark/Hadoop/Hive cluster, I
> found there is always a failure in my aggregation test. I suspect this
> problem
I can't tell immediately, but you might be able to get more info with the
hint provided here:
http://stackoverflow.com/questions/27980781/spark-task-not-serializable-with-simple-accumulator
(short version, set -Dsun.io.serialization.extendedDebugInfo=true)
Also, unless you're simplifying your exam
Hi all,
There are two examples: one throws "Task not serializable" when executed in the spark
shell, the other one is OK. I am very puzzled. Can anyone explain what's different
about these two pieces of code and why the other one is OK?
1. The one which throws Task not serializable:
import org.apache.spark._
import SparkConte
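For illustration only (the original code above was cut off in the archive), the usual shape of this problem is a closure that drags in a non-serializable outer object; capturing just the needed value in a local val avoids it:

import org.apache.spark.{SparkConf, SparkContext}

class Holder(val factor: Int)   // not Serializable

val sc = new SparkContext(new SparkConf().setAppName("demo").setMaster("local[2]"))
val holder = new Holder(3)
val rdd = sc.parallelize(1 to 10)

// Throws "Task not serializable": the closure captures `holder` itself
// rdd.map(x => x * holder.factor).collect()

// Works: only the primitive value is captured by the closure
val factor = holder.factor
rdd.map(x => x * factor).collect()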
Dear List,
I'm trying to reference a lonely message to this list from March 25th,(
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Maven-Test-error-td22216.html
), but I'm unsure this will thread properly. Sorry if it didn't work out.
Anyway, using Spark 1.4.0-RC4 I ru
che.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:52)
> at
>
> org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:433)
> at
>
> org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
> 15/03/06 17:39:32 IN
adoopUtil.scala:52)
at
org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:433)
at
org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
15/03/06 17:39:32 INFO yarn.ApplicationMaster: AppMaster received a signal./
> Do you get this failure repeatedly?
>
>
>
> On Thu, May 14, 2015 at 12:55 AM, kf wrote:
>
>> Hi, all, i got following error when i run unit test of spark by
>> dev/run-tests
>> on the latest "branch-1.4" branch.
>>
>> the latest com
Yes, it happens repeatedly on my local Jenkins.
Sent from my iPhone
On May 14, 2015, at 18:30, "Tathagata Das"
<mailto:t...@databricks.com> wrote:
Do you get this failure repeatedly?
On Thu, May 14, 2015 at 12:55 AM, kf
mailto:wangf...@huawei.com>> wrote:
Hi, all, i got following error w
Do you get this failure repeatedly?
On Thu, May 14, 2015 at 12:55 AM, kf wrote:
> Hi, all, i got following error when i run unit test of spark by
> dev/run-tests
> on the latest "branch-1.4" branch.
>
> the latest commit id:
> commit d518c0369fa412567855980c3f0f42
Hi all, I got the following error when I ran the Spark unit tests via dev/run-tests
on the latest "branch-1.4" branch.
the latest commit id:
commit d518c0369fa412567855980c3f0f426cde5c190d
Author: zsxwing
Date: Wed May 13 17:58:29 2015 -0700
error
[
I'm also getting the same error.
Any ideas?
The standard incantation -- which is a little different from standard
Maven practice -- is:
mvn -DskipTests [your options] clean package
mvn [your options] test
Some tests require the assembly, so you have to do it this way.
I don't know what the test failures were; you didn't post
Hi,
I selected a "starter task" in JIRA, and made changes to my github fork of
the current code.
I assumed I would be able to build and test.
% mvn clean compile was fine
but
% mvn package failed
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.18:tes
of frequency counts, to Pearson Chi-Square correlation statistics and
> perform a Chi-Squared hypothesis test. The user response data represents a
> multiple choice question-answer (MCQ) format. The goal is to compute all
> choose-two combinations of question answers (precondition, question
f user response data, to contingency
tables of frequency counts, to Pearson Chi-Square correlation statistics
and perform a Chi-Squared hypothesis test. The user response data
represents a multiple choice question-answer (MCQ) format. The goal is to
compute all choose-two combinations of q
It's because your tests are running in parallel and you can only have one
context running at a time.
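If sbt drives the tests, the usual way to serialize them is a one-line build.sbt setting (a sketch; it matches the advice that appears further down this digest):

// build.sbt: run test suites one at a time so only one SparkContext
// is alive per JVM
parallelExecution in Test := false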
Unknown Source)
> [info] at java.lang.ClassLoader.defineClass(Unknown Source)
> [info] at java.security.SecureClassLoader.defineClass(Unknown Source)
> [info] at java.net.URLClassLoader.defineClass(Unknown Source)
> [info] at java.net.URLClassLoader.access$100(Unknown Source)
Hi experts,
I am trying to write unit tests for my Spark application, which fails with a
javax.servlet.FilterRegistration error.
I am using CDH 5.3.2 Spark and below is my dependency list.
val spark = "1.2.0-cdh5.3.2"
val esriGeometryAPI = "1.2"
val csvWriter = "1.0.0"
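The usual workaround for the FilterRegistration clash is to keep only one servlet-api on the test classpath, typically by excluding javax.servlet from the Hadoop/CDH artifacts. A sketch in sbt terms (the artifact and version below are illustrative; adjust to the actual dependency list):

// build.sbt: drop the servlet-api that comes in via the Hadoop artifacts,
// so only one javax.servlet implementation ends up on the test classpath
libraryDependencies += ("org.apache.hadoop" % "hadoop-common" % "2.5.0-cdh5.3.2" % "test")
  .excludeAll(ExclusionRule(organization = "javax.servlet"))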
ctFile("tachyon://host:19998/test")
> and
> rdd.saveAsTextFile("tachyon://host:19998/test") succeed, but
> rdd.toDF().saveAsParquetFile("tachyon://host:19998/test") failure.
>
> ERROR MESSAGE:java.lang.IllegalArgumentException: Wrong FS:
> tac
The Spark version is 1.3.0 with Tachyon 0.6.1.
QUESTION DESCRIPTION: rdd.saveAsObjectFile("tachyon://host:19998/test") and
rdd.saveAsTextFile("tachyon://host:19998/test") succeed, but
rdd.toDF().saveAsParquetFile("tachyon://host:19998/test&
I use the following commands to run the unit tests:
./make-distribution.sh --tgz --skip-java-test -Pscala-2.10 -Phadoop-2.3
-Phive -Phive-thriftserver -Pyarn -Dyarn.version=2.3.0-cdh5.1.2
-Dhadoop.version=2.3.0-cdh5.1.2
mvn -Pscala-2.10 -Phadoop-2.3 -Phive -Phive-thriftserver -Pyarn
-Dyarn.version=2.3.0
On Fri, Mar 6, 2015 at 2:47 PM, nitinkak001 wrote:
> I am trying to run a Hive query from Spark using HiveContext. Here is the
> code
>
> / val conf = new SparkConf().setAppName("HiveSparkIntegrationTest")
>
>
> conf.set("spark.executor.extraClassPath",
> "/opt/cloudera/parcels/CDH-5.2.0-1.cdh
I am using Spark-1.1.1. When I used "sbt test", I ran into the
following exceptions. Any idea how to solve it? Thanks! I think
somebody posted this question before, but no one seemed to have
answered it. Could it be the version of "io.netty" I put in my
build.sbt? I in
Hi,
I extended org.apache.spark.streaming.TestSuiteBase for some testing, and I
was able to run this test fine:
test("Sliding window join with 3 second window duration") {
val input1 =
Seq(
Seq("req1"),
Seq("req2", "req3"),
> logger.warn("!!! DEBUG !!! target: {}", r.getURI());
>
> String response = r.accept(MediaType.APPLICATION_JSON_TYPE)
>         //.header("")
>         .get(String.class);
>
> logger.warn("!!! DEBUG !!! Spotlight resp
!! Spotlight response: {}", response);
It seems to work when I use spark-submit to submit the application that
includes this code.
Funny thing is, now my relevant unit test does not run, complaining about
not having enough memory:
Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_
    <exclusion>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
    </exclusion>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
      <version>1.7.7</version>
      <exclusions>
        <exclusion>
          <groupId>javax.servlet</groupId>
          <artifactId>*</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.4.0</version>
    </dependency>
!!! target: {}", target.getUri().toString());
>
> String response =
> target.request().accept(MediaType.APPLICATION_JSON_TYPE).get(String.class);
>
> logger.warn("!!! DEBUG !!! Spotlight response: {}", response);
>
> When run inside a unit test as follo
}", target.getUri().toString());
String response =
target.request().accept(MediaType.APPLICATION_JSON_TYPE).get(String.class);
logger.warn("!!! DEBUG !!! Spotlight response: {}", response);
When run inside a unit test as follows:
mvn clean test -Dtest=SpotlightTest#testC
scala.Option.foreach(Option.scala:236)
On 15.12.2014, at 22:36, Marius Soutier wrote:
> Ok, maybe these test versions will help me then. I’ll check it out.
>
> On 15.12.2014, at 22:33, Michael Armbrust wrote:
>
>> Using a single SparkContext should not cause this problem. In the SQ
Ok, maybe these test versions will help me then. I’ll check it out.
On 15.12.2014, at 22:33, Michael Armbrust wrote:
> Using a single SparkContext should not cause this problem. In the SQL tests
> we use TestSQLContext and TestHive which are global singletons for all of our
> uni
ent it, i.e.
> fork in Test := true and isolated. Can you confirm that reusing a single
> SparkContext for multiple tests poses a problem as well?
>
> Other than that, just switching from SQLContext to HiveContext also
> provoked the error.
>
>
> On 15.12.2014, at 20:22, Mich
Possible, yes, although I’m trying everything I can to prevent it, i.e. fork in
Test := true and isolated. Can you confirm that reusing a single SparkContext
for multiple tests poses a problem as well?
Other than that, just switching from SQLContext to HiveContext also provoked
the error.
On
org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
>
> I can only prevent this from happening by using isolated Specs tests thats
> always create a new SparkContext that is not shared between tests (but
> there can also be only a single SparkContext per test), and also by using
&
)
org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
I can only prevent this from happening by using isolated Specs tests that
always create a new SparkContext that is not shared between tests (but there
can also be only a single SparkContext per test), and also by using standard
SQLContext instead of
Hi,
https://github.com/databricks/spark-perf/tree/master/streaming-tests/src/main/scala/streaming/perf
contains some performance tests for streaming. There are examples of how to
generate synthetic files during the test in that repo, maybe you
can find some code snippets that you can use there
on to my local Spark, it waits for a file to be
written to a given directory, and when I create that file it successfully
prints the number of words. I terminate the application by pressing Ctrl+C.
Now I've tried to create a very basic unit test for this functionality, but
in the test I was n
I am trying to look at problems reading a data file over 4 GB. In my testing
I am trying to create such a file.
My plan is to create a FASTA file (a simple format used in biology)
looking like
>1
TCCTTACGGAGTTCGGGTGTTTATCTTACTTATCGCGGTTCGCTGCCGCTCCGGGAGCCCGGATAGGCTGCGTTAATACCTAAGGAGCGCGTATTG
>2
G
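A rough sketch of generating such a file with Spark itself (record count, sequence length, partition count and output path are all placeholders to tune until the output passes 4 GB):

import scala.util.Random
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("MakeFasta").setMaster("local[4]"))

val bases = "ACGT"
val numRecords = 40000000    // ~110 bytes per record => a bit over 4 GB of text

sc.parallelize(1 to numRecords, 400)
  .map { i =>
    val rnd = new Random(i)                               // per-record seed, reproducible
    val seq = (1 to 100).map(_ => bases(rnd.nextInt(4))).mkString
    s">$i\n$seq"                                          // header line + sequence line
  }
  .saveAsTextFile("hdfs:///tmp/synthetic_fasta")          // placeholder output path

saveAsTextFile writes a directory of part files; hadoop fs -getmerge (or coalesce(1) for small cases) gives a single file if the reading test needs exactly one.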
I don't think 'sbt assembly' would touch the local Maven repo for Spark.
Looking at dependency:tree output:
[INFO] org.apache.spark:spark-streaming_2.10:jar:1.1.0-SNAPSHOT
[INFO] +- org.apache.spark:spark-core_2.10:jar:1.1.0-SNAPSHOT:compile
spark-streaming only depends on spark-core other than thir
Hello,
Specifying '-DskipTests' on the command line worked, though I can't be sure
whether first running 'sbt assembly' also contributed to the solution.
(I've tried 'sbt assembly' because branch-1.1's README says to use sbt.)
Thanks for the answer.
Kind regards,
Emre Sevinç
Please specify '-DskipTests' on the command line.
Cheers
On Dec 5, 2014, at 3:52 AM, Emre Sevinc wrote:
> Hello,
>
> I'm currently developing a Spark Streaming application and trying to write my
> first unit test. I've used Java for this application, and I also
Hello,
I'm currently developing a Spark Streaming application and trying to write
my first unit test. I've used Java for this application, and I also need to
use Java (and JUnit) for writing unit tests.
I could not find any documentation that focuses on Spark Streaming unit
testing, a
Test message
Dear all,
We encountered problems with failed jobs on huge amounts of data.
A simple local test was prepared for this question at
https://gist.github.com/copy-of-rezo/6a137e13a1e4f841e7eb
It generates 2 sets of key-value pairs, joins them, selects distinct values
and finally counts the data.
object
The problem was mine: I was using FunSuite in the wrong way. The test
method of FunSuite registers a "test" to be triggered when the tests are
run. My localTest is instead creating and stopping the SparkContext
during test registration, and as a result my SparkContext is stopped
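For the archives, a sketch of the corrected helper: create and stop the SparkContext inside the registered test body rather than at registration time (assuming ScalaTest FunSuite and a local master; here the context is handed to the test body, other shapes work too):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.FunSuite

class SparkFunSuite extends FunSuite {
  // test(...) only registers the body; the SparkContext is created and
  // stopped when the test actually executes, not at registration time
  def localTest(name: String)(body: SparkContext => Unit): Unit = test(name) {
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName(name))
    try body(sc) finally sc.stop()
  }
}

class MyJobSuite extends SparkFunSuite {
  localTest("word count") { sc =>
    assert(sc.parallelize(Seq("a", "b", "a")).countByValue()("a") === 2)
  }
}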
40 PM, Mario Pastorelli
wrote:
> I would like to use FunSuite to test my Spark jobs by extending FunSuite with
> a new function, called localTest, that runs a test with a default
> SparkContext:
>
> class SparkFunSuite extends FunSuite {
>
> def localTest(name : String
I would like to use FunSuite
<http://doc.scalatest.org/2.2.1/index.html#org.scalatest.FunSuite> to
test my Spark jobs by extending FunSuite with a new function, called
localTest, that runs a test with a default SparkContext:
class SparkFunSuite extends FunSuite {
def localTes
Hi, please can someone advise on this?
On Wed, Sep 17, 2014 at 6:59 PM, VJ Shalish wrote:
> I am trying to benchmark spark in a hadoop cluster.
> I need to design a sample spark job to test the CPU utilization, RAM
> usage, Input throughput, Output throughput and Duration of executi
When I run `sbt "test-only SparkTest"` or `sbt "test-only SparkTest1"`, it
passes. But running `sbt test` to run both SparkTest and SparkTest1
fails.
If I merge all cases into one file, the tests pass.
I am trying to benchmark Spark in a Hadoop cluster.
I need to design a sample Spark job to test the CPU utilization, RAM usage,
input throughput, output throughput and duration of execution in the
cluster.
I need to test the state of the cluster for:
A spark job which uses high CPU
A spark
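A rough sketch of the CPU-bound case (the sizes, the arithmetic in the loop, and the partition count are placeholders to tune for the cluster; swapping the loop for large reads/writes or a wide shuffle gives I/O- and memory-bound variants):

import org.apache.spark.{SparkConf, SparkContext}

object CpuBoundBenchmark {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("CpuBoundBenchmark"))
    val start = System.currentTimeMillis()

    // CPU-bound: a tight numeric loop per element, negligible I/O and no shuffle
    val result = sc.parallelize(1 to 1000, 1000).map { i =>
      var acc = 0.0
      var j = 0
      while (j < 5000000) { acc += math.sqrt((j + i).toDouble); j += 1 }
      acc
    }.reduce(_ + _)

    println(s"result=$result duration=${System.currentTimeMillis() - start} ms")
    sc.stop()
  }
}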
I use
https://github.com/apache/spark/blob/master/streaming/src/test/scala/org/apache/spark/streaming/TestSuiteBase.scala
to help me with testing.
In Spark 0.9.1 my tests depending on TestSuiteBase worked fine. As soon as I
switched to the latest version (1.0.1), all tests fail. My sbt import is
o1982 wrote:
> Hi Xiangrui,
>
> You can refer to < in R>>, there are many stander hypothesis test to do regarding to linear
> regression and logistic regression, they should be implement in the fist
> order, then we will list some other testes, which are also important whe
Hi Xiangrui,
You can refer to <>; there are many standard hypothesis tests to do regarding linear
regression and logistic regression. They should be implemented first,
and then we will list some other tests, which are also important when using
logistic regression to build score
:50 PM, guxiaobo1982 wrote:
> Hi,
>
> From the documentation I think only the model fitting part is implement,
> what about the various hypothesis test and performance indexes used to
> evaluate the model fit?
>
> R
Hi,
From the documentation I think only the model fitting part is implemented; what
about the various hypothesis tests and performance indexes used to evaluate the
model fit?
Regards,
Xiaobo Gu
Hi TD,
I tried some different Maven setups these days, and now I can at least get
something when running "mvn test". However, it seems like scalatest cannot
find the test cases specified in the test suite.
Here is the output I get:
<http://apache-spark-user-list.1001560.n3.na
Does it not show the name of the test suite on stdout, showing that it has
passed? Can you try writing a small "test" unit test, in the same way as
your Kafka unit test, and with print statements on stdout ... to see
whether it works? I believe it is some configuration issue in maven, whi
Thank you TD,
I have worked around that problem and now the test compiles.
However, I don't actually see that test running: when I do "mvn test", it
just says "BUILD SUCCESS", without any TEST section on stdout.
Are we supposed to use "mvn test" to run th
ng to run the KafkaStreamSuite.scala unit
> test.
> I added "scalatest-maven-plugin" to my pom.xml, then ran "mvn test", and got
> the follow error message:
>
> error: object Utils in package util cannot be accessed in package
> org.apache.spark.util
>
Hi TD,
I encountered a problem when trying to run the KafkaStreamSuite.scala unit
test.
I added "scalatest-maven-plugin" to my pom.xml, then ran "mvn test", and got
the follow error message:
error: object Utils in package util cannot be accessed in package
o
This helps a lot!!
Thank you very much!
Jiajia
Appropriately timed question! Here is the PR that adds a real unit
test for Kafka stream in Spark Streaming. Maybe this will help!
https://github.com/apache/spark/pull/1751/files
On Mon, Aug 4, 2014 at 6:30 PM, JiajiaJing wrote:
> Hello Spark Users,
>
> I have a spark streaming pro
Hello Spark Users,
I have a Spark Streaming program that streams data from Kafka topics and
outputs it as parquet files on HDFS.
Now I want to write a unit test for this program to make sure the output
data is correct (i.e. not missing any data from Kafka).
However, I have no idea how to do this
Hi guys,
I want to use Elasticsearch and HBase in my Spark project, and I want to create
a test. I pulled up ES and Zookeeper, but if I put "val htest = new
HBaseTestingUtility()" into my app I get a strange exception (at compile
time, not runtime).
https://gist.github.com/b0c1/4a4b3f6350
karound :
>>> 1) download compiled winutils.exe from
>>> http://social.msdn.microsoft.com/Forums/windowsazure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
>>> 2) put this file into d:\winutil\bin
>>> 3) add in
>>
>> I found the next workaround :
>> 1) download compiled winutils.exe from
>> http://social.msdn.microsoft.com/Forums/windowsazure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
>> 2) put this file in
=hdinsight
2) put this file into d:\winutil\bin
3) add in my test: System.setProperty("hadoop.home.dir", "d:\\winutil\\")
after that test runs
Thank you,
Konstantin Kudryavtsev
On Wed, Jul 2, 2014 at 10:24 PM, Denny Lee wrote:
You don't actually need it per se - its ju
/please-read-if-experiencing-job-failures?forum=hdinsight
2) put this file into d:\winutil\bin
3) add in my test: System.setProperty("hadoop.home.dir", "d:\\winutil\\")
after that test runs
Thank you,
Konstantin Kudryavtsev
On Wed, Jul 2, 2014 at 10:24 PM, Denny Lee wrote:
udes "null" though.
>>
>> Could you provide the full stack trace?
>>
>> Andrew
>>
>>
>> 2014-07-02 9:38 GMT-07:00 Konstantin Kudryavtsev <
>> kudryavtsev.konstan...@gmail.com>:
>>
>> Hi all,
>>>
120)
>>
>>
>> Thank you,
>> Konstantin Kudryavtsev
>>
>>
>> On Wed, Jul 2, 2014 at 8:15 PM, Andrew Or wrote:
>> Hi Konstatin,
>>
>> We use hadoop as a library in a few places in Spark. I wonder why the path
>> includ
Andrew Or wrote:
>> Hi Konstatin,
>>
>> We use hadoop as a library in a few places in Spark. I wonder why the path
>> includes "null" though.
>>
>> Could you provide the full stack trace?
>>
>> Andrew
>>
>>
>> 2014-07
> 2014-07-02 9:38 GMT-07:00 Konstantin Kudryavtsev <
> kudryavtsev.konstan...@gmail.com>:
>
> Hi all,
>>
>> I'm trying to run some transformation on *Spark*, it works fine on
>> cluster (YARN, linux machines). However, when I'm trying to run it on local
>&g
t; I'm trying to run some transformation on *Spark*, it works fine on
> cluster (YARN, linux machines). However, when I'm trying to run it on local
> machine (*Windows 7*) under unit test, I got errors:
>
> java.io.IOException: Could not locate executable null\b
Hi all,
I'm trying to run some transformations on *Spark*; it works fine on the cluster
(YARN, Linux machines). However, when I try to run it on my local machine
(*Windows 7*) under a unit test, I get errors:
java.io.IOException: Could not locate executable null\bin\winutils.exe
in
Hello, I am new to Scala & Spark. Yesterday I compiled Spark from the 1.0.0
source code and ran the tests, and there is a test case that failed.
For example, running this command in the shell: sbt/sbt "testOnly
org.apache.spark.streaming.InputStreamsSuite"
the testcase: test("socket in
previous test's tearDown
@Before
def setUp() {   // presumably the enclosing JUnit setup method; its opening was cut off above
  spark = new SparkContext("local", "test spark")
}
@After
def tearDown() {
  spark.stop()
  spark = null // not sure why this helps but it does!
  System.clearProperty("spark.master.port")
}
It
run faster. Start-up and shutdown are
time consuming (they can add a few seconds per test).
- The downside is that your tests are using the same SparkContext so they are
less independent of each other. I haven’t seen issues with this yet but there
are likely some things that might crop up.
Best
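A sketch of that shared-context pattern with ScalaTest, loosely modeled on the helper in Spark's own tests (the trait name and master setting here are illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.{BeforeAndAfterAll, Suite}

trait SharedSparkContext extends BeforeAndAfterAll { self: Suite =>
  @transient var sc: SparkContext = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    // one context for the whole suite; individual tests share it
    sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName(suiteName))
  }

  override def afterAll(): Unit = {
    try if (sc != null) sc.stop()
    finally super.afterAll()
  }
}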
tests.
This can be done by adding the following line in your build.sbt:
parallelExecution in Test := false
Cheers,
Anselme
2014-06-17 23:01 GMT+02:00 SK :
> Hi,
>
> I have 3 unit tests (independent of each other) in the /src/test/scala
> folder. When I run each of them indivi
Hi,
I have 3 unit tests (independent of each other) in the /src/test/scala
folder. When I run each of them individually using: sbt "test-only ",
all 3 pass. But when I run them all using "sbt test", they
fail with the warning below. I am wondering if t
Hi,
My unit test is failing (the output does not match the expected output). I
would like to print out the value of the output, but
rdd.foreach(r => println(r)) does not work from the unit test. How can I print
or write the output to a file/screen?
thanks.
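rdd.foreach(println) executes on the executors, so nothing shows up in the test's own stdout; collecting first puts the data back in the driver/test JVM. A minimal sketch:

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("debug-output"))
val rdd = sc.parallelize(Seq(1, 2, 3))   // stand-in for the RDD the test produces

// collect() brings the contents back to the driver (the test's JVM),
// where println output is visible and assertions can be made
val result = rdd.collect()
result.foreach(println)
assert(result.toSet == Set(1, 2, 3))
sc.stop()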
Hi!
I have two questions:
1.
I want to test my application. My app will write the result to Elasticsearch
(stage 1) with saveAsHadoopFile. How can I write a test case for it? Do I need to
pull up a MiniDFSCluster? Or is there another way?
My application flow plan:
Kafka => Spark Streaming (enr
I didn't get the original message, only the reply. Ruh-roh.
On Sun, May 11, 2014 at 8:09 AM, Azuryy wrote:
> Got.
>
> But it doesn't indicate all can receive this test.
>
> Mail list is unstable recently.
>
>
> Sent from my iPhone5s
>
> On May 10, 2014, a