Hello there! I'm getting a compilation error when trying to run sbt
mllib/test. Here are the errors:
[info] Compiling 15 Java sources to /media/SB-1TB/workarea/apache-spark/unsafe/target/scala-2.10/classes...
[error]
After an sbt update the tests run, but all the cluster ones fail on "task
size should be small" in both training and prediction.
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/unsafe-compile-error-tp12815p12816.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.
Hi guys,
Does anyone know what the status of this issue is?
(https://issues.apache.org/jira/browse/SPARK-2344)
Beniamino was supposed to add an implementation, but it seems that his code
was removed from GitHub
(https://github.com/bdelpizzo/mllib-extension/blob/master/clustering/FCM.scala)
and
In addition, if you want to run a single suite, you may use:
mllib/testOnly $SUITE_NAME
with sbt.
On Jun 21, 2015 10:32 AM, Burak Yavuz brk...@gmail.com wrote:
You need to build an assembly jar for the cluster tests to pass. You may
use 'sbt assembly/assembly'.
Best,
Burak
On Jun 21, 2015 3:43 AM, acidghost andreajemm...@gmail.com wrote:
After an sbt update the tests run, but all the cluster ones fail on "task
size should be small" in both training and prediction.
Put them in quotes, e.g.
sbt/sbt "mllib/testOnly *NaiveBayesSuite"
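The quoting matters because the shell expands an unquoted `*` against file names in the current directory before sbt ever sees the argument. A minimal demonstration of that behavior, with no sbt required (the file name here is made up):

```shell
# Work in a scratch directory that contains a file matching the glob.
cd "$(mktemp -d)"
touch FooNaiveBayesSuite

# Unquoted: the shell expands the glob, so the command receives a file name.
printf '%s\n' *NaiveBayesSuite      # prints FooNaiveBayesSuite

# Quoted: the literal pattern survives, so sbt can match suites against it.
printf '%s\n' "*NaiveBayesSuite"    # prints *NaiveBayesSuite
```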
On Sun, Jun 21, 2015 at 11:15 AM, acidghost andreajemm...@gmail.com wrote:
Something like mllib/testOnly NaiveBayesSuite is what I need!
But it's not working; it runs all the mllib suites.
One way for this to happen is to have the intermediate data for the
aggregate function be a byte array operated on using Unsafe -- that plays very
nicely with the binary data processing we are doing (i.e., fast
serialization, no GC).
The downside is that we'd need to re-implement whatever algorithm
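The byte-array idea above can be sketched roughly as follows. This is only an illustration, not Spark's actual code: the class name and the [count: long | sum: double] buffer layout are assumptions. The point is that the aggregation state lives in a plain byte[] and is updated in place through sun.misc.Unsafe, so it is already in serialized form and never becomes a GC-tracked object graph.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeAggBuffer {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe singleton via reflection (standard hack; the
        // field is private but sun.misc is opened by jdk.unsupported).
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);
        long base = Unsafe.ARRAY_BYTE_BASE_OFFSET;

        // Hypothetical aggregate buffer: [count: long][sum: double], 16 bytes.
        byte[] buf = new byte[16];

        // "Update" step of the aggregate: bump the count and add the value,
        // writing directly into the byte array at fixed offsets.
        double[] values = {1.5, 2.5, 4.0};
        for (double v : values) {
            unsafe.putLong(buf, base, unsafe.getLong(buf, base) + 1);
            unsafe.putDouble(buf, base + 8, unsafe.getDouble(buf, base + 8) + v);
        }

        System.out.println(unsafe.getLong(buf, base));       // 3
        System.out.println(unsafe.getDouble(buf, base + 8)); // 8.0
    }
}
```

Merging two partitions' buffers would just read both arrays field by field and write the sums back, and shipping a buffer over the wire is a raw byte copy -- which is the "fast serialization" upside mentioned above.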
I am running a very simple program (WordCount) in Spark standalone mode. I
find that when I examine the web UI, a couple of bugs arise:
1. There is a discrepancy between the number denoting the duration of the
application when I run the history server and the number given by the web UI
(default
Hi all,
How do I deal with the error on the official Jenkins?
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/35412/console
```
Archiving unit tests logs...
Send successful.
Attempting to post to Github...
Post successful.
Archiving artifacts
WARN: No artifacts found that
```
This is a side effect of the new pull request tester script interacting badly
with a Jenkins plugin, not anything caused by your changes. I'm working on a
fix but in the meantime I'd just trust what SparkQA says.
Sent from my phone
On Jun 21, 2015, at 1:54 PM, Yu Ishikawa