On my MacBook with a 2.6 GHz Intel i7 CPU, I run zinc.

Here is the tail of mvn build output:

[INFO] Spark Project External Flume ...................... SUCCESS [7.368s]
[INFO] Spark Project External ZeroMQ ..................... SUCCESS [9.153s]
[INFO] Spark Project External MQTT ....................... SUCCESS [5.233s]
[INFO] Spark Project Examples ............................ SUCCESS [49.011s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 7:42.208s
[INFO] Finished at: Tue Nov 04 18:10:44 PST 2014
[INFO] Final Memory: 48M/500M

FYI

On Tue, Nov 4, 2014 at 5:53 PM, Alessandro Baretta <alexbare...@gmail.com> wrote:

> Nicholas,
>
> Indeed, I was trying to use sbt to speed up the build. My initial
> experiments with the Maven process took over 50 minutes, which on a 4-core
> 2014 MacBook Pro seems obscene. Then again, after the failed attempt with
> sbt, mvn clean package took only 13 minutes, leading me to think that most
> of the time was somehow being spent downloading and building
> dependencies.
>
> Anyway, if sbt is supported, it would be great to add docs about it
> somewhere, especially since, as you point out, most devs are using it.
>
> Thanks for your help.
>
> Alex
>
> On Tue, Nov 4, 2014 at 5:42 PM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:
>
> > Zinc, I believe, is something you can install and run to speed up your
> > Maven builds. It's not required.
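> >
> > If you want to try it, something like this should work (a sketch, assuming
> > the standalone zinc distribution, e.g. from Homebrew; port 3030 is the
> > scala-maven-plugin's default):
> >
> >     brew install zinc    # or unpack the zinc tarball and put bin/ on your PATH
> >     zinc -start          # launches a long-running compile server on port 3030
> >     mvn -DskipTests clean package   # Maven compiles go through zinc from here on
> >
> > and `zinc -shutdown` stops the server again.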
> >
> > I get a bunch of warnings when compiling with Maven, too. Dunno if they
> > are expected or not, but things work fine from there on.
> >
> > Many people do indeed use sbt. I don't know where we have documentation
> > on how to use sbt (we recently removed it from the README), but sbt/sbt
> > clean followed by sbt/sbt assembly should work fine.
> >
> > Maven is indeed the "proper" way to build Spark, but building with sbt
> > is supported too, and most Spark devs, I believe, use it because it's
> > faster than Maven.
> >
> > Nick
> >
> > On Tue, Nov 4, 2014 at 8:03 PM, Alessandro Baretta <alexbare...@gmail.com> wrote:
> >
> >> Nicholas,
> >>
> >> Yes, I saw them, but they refer to Maven, and I'm under the impression
> >> that sbt is the preferred way of building Spark. Is Maven indeed the
> >> "right way"? Anyway, as per your advice I ctrl-d'ed my sbt shell and ran
> >> `mvn -DskipTests clean package`, which completed successfully. So,
> >> indeed, in trying to use sbt I was on a wild goose chase.
> >>
> >> Here are a couple of glitches I'm seeing. First of all, many warnings
> >> such as the following:
> >>
> >> [WARNING]
> >> /home/alex/git/spark/streaming/src/test/scala/org/apache/spark/streaming/BasicOperationsSuite.scala:454:
> >> inferred existential type
> >> scala.collection.mutable.HashMap[org.apache.spark.streaming.Time,org.apache.spark.rdd.RDD[_$2]]
> >> forSome { type _$2 }, which cannot be expressed by wildcards, should be
> >> enabled by making the implicit value scala.language.existentials visible.
> >> [WARNING]  assert(windowedStream2.generatedRDDs.contains(Time(10000)))
> >> [WARNING]                            ^
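> >>
> >> (If I read it right, the fix the message itself suggests is just this at
> >> the top of the affected file, or equivalently the -language:existentials
> >> compiler flag:)
> >>
> >>     // Acknowledges the existentials language feature, which silences
> >>     // the "inferred existential type" warning for this file.
> >>     import scala.language.existentials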
> >>
> >> [WARNING]
> >> /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/parquet/FakeParquetSerDe.scala:34:
> >> @deprecated now takes two arguments; see the scaladoc.
> >> [WARNING] @deprecated("No code should depend on FakeParquetHiveSerDe as
> >> it is only intended as a " +
> >> [WARNING]  ^
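> >>
> >> (This one seems to be about the annotation's new signature: since Scala
> >> 2.10, @deprecated takes a message plus a "since" version. Presumably the
> >> fix is something like the following, with a placeholder version string:)
> >>
> >>     // Placeholder message and version, just to show the two-argument form.
> >>     @deprecated("No code should depend on FakeParquetHiveSerDe", "x.y.z")
> >>     class FakeParquetSerDe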
> >>
> >> [WARNING]
> >> /home/alex/git/spark/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala:435:
> >> trait Deserializer in package serde2 is deprecated: see corresponding
> >> Javadoc for more information.
> >> [WARNING] Utils.getContextOrSparkClassLoader).asInstanceOf[Class[Deserializer]],
> >> [WARNING]                                                        ^
> >>
> >> [WARNING]
> >> /home/alex/git/spark/examples/src/main/scala/org/apache/spark/examples/mllib/StreamingKMeans.scala:22:
> >> imported `StreamingKMeans' is permanently hidden by definition of object
> >> StreamingKMeans in package mllib
> >> [WARNING] import org.apache.spark.mllib.clustering.StreamingKMeans
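> >>
> >> (Presumably avoidable by renaming the import so it isn't shadowed by the
> >> local object of the same name, something like this:)
> >>
> >>     // Hypothetical alias; any name that differs from the local object works.
> >>     import org.apache.spark.mllib.clustering.{StreamingKMeans => MLlibStreamingKMeans}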
> >>
> >> Are they expected?
> >>
> >> Also, mvn complains about not having zinc. Is this a problem?
> >>
> >> [WARNING] Zinc server is not available at port 3030 - reverting to
> >> normal incremental compile
> >>
> >> Alex
> >>
> >> On Tue, Nov 4, 2014 at 3:11 PM, Nicholas Chammas <nicholas.cham...@gmail.com> wrote:
> >>
> >>> FWIW, the "official" build instructions are here:
> >>> https://github.com/apache/spark#building-spark
> >>>
> >>> On Tue, Nov 4, 2014 at 5:11 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >>>
> >>>> I built based on this commit today and the build was successful.
> >>>>
> >>>> What command did you use ?
> >>>>
> >>>> Cheers
> >>>>
> >>>> On Tue, Nov 4, 2014 at 2:08 PM, Alessandro Baretta <alexbare...@gmail.com> wrote:
> >>>>
> >>>> > Fellow Sparkers,
> >>>> >
> >>>> > I am new here and still trying to learn to crawl. Please bear with
> >>>> > me.
> >>>> >
> >>>> > I just pulled f90ad5d from https://github.com/apache/spark.git and
> >>>> > am running the compile command in the sbt shell. This is the error
> >>>> > I'm seeing:
> >>>> >
> >>>> > [error]
> >>>> > /home/alex/git/spark/mllib/src/main/scala/org/apache/spark/mllib/linalg/Vectors.scala:32:
> >>>> > object sql is not a member of package org.apache.spark
> >>>> > [error] import org.apache.spark.sql.catalyst.types._
> >>>> > [error]                         ^
> >>>> >
> >>>> > Am I doing something obscenely stupid, or is the build genuinely
> >>>> > broken?
> >>>> >
> >>>> > Alex
> >>>> >
> >>>>
> >>>
> >>>
> >>
> >
>
