of building the project.
On Tue, Dec 10, 2019 at 7:00 AM Deepak Vohra wrote:
> The initial question was to build from source. Any reason to build when
> binaries are available at https://spark.apache.org/downloads.html
>
> On Tuesday, December 10, 2019, 03:05:44 AM UTC, Ping Liu <
> ping
> Working on Apache Spark on Windows - DZone Open Source
>
> This article explains and provides solutions for some of the most common
> errors developers come across when inst...
> <https://dzone.com/articles/working-on-apache-spark-on-windows>
>
> On Monday, December 9, 2019, 11:27:53 p.m. UTC, Ping Liu wrote:
such as Pyspark and R) on the Spark executor hosts,
sometimes with conflicting versions."
> Running Spark in Docker Containers on YARN
>
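The conflict the quoted article describes (packages such as PySpark and R installed on executor hosts with mismatched versions) is typically avoided by pointing YARN's Docker runtime at an image that bundles the right versions. The image name below is hypothetical, and the property names should be checked against the YARN Docker-runtime documentation for your Hadoop distribution; this is only a sketch of the technique, not a command from the thread:

```shell
# Sketch: ask YARN's Docker runtime to launch the AM and executors
# inside a container image. "myrepo/spark-py:3.0" is a hypothetical
# image that bundles the desired Python/R versions.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=myrepo/spark-py:3.0 \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=myrepo/spark-py:3.0 \
  my_app.py
```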
> On Monday, December 9, 2019, 08:37:47 p.m. UTC, Ping Liu <
> pingpinga...@gmail.com> wrote:
> <exclusion>
>   <groupId>com.google.guava</groupId>
>   <artifactId>guava</artifactId>
> </exclusion>
>
> <dependency>
>   <groupId>com.google.guava</groupId>
>   <artifactId>guava</artifactId>
>   <version>28.1-jre</version>
> </dependency>
>
> On Friday, December 6, 2019, 10:12:55 p.m. UTC, Ping Liu <
> pingpinga...@gmail.com> wrote:
>
>
> Hi Deepak,
>
> Following your suggestion, I put
nrepo\com\google\guava\guava>ls
14.0.1 16.0.1 18.0 19.0
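The local-repository listing above shows which Guava jars have been downloaded, but not which one Maven actually resolves onto Spark's classpath. A generic way to check the resolved version (standard Maven tooling, not something suggested in the thread) is the dependency plugin:

```shell
# Show where the Guava dependency comes from in Spark's resolved tree
./build/mvn dependency:tree -Dincludes=com.google.guava:guava
```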
On Thu, Dec 5, 2019 at 5:12 PM Deepak Vohra wrote:
> Just to clarify, excluding the Hadoop-provided Guava in pom.xml is an
> alternative to using an uber JAR, which is a more involved process.
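For completeness, the "more involved" uber-JAR route usually means the Maven shade plugin, relocating Guava's packages so the application's copy cannot clash with Hadoop's. This is a generic sketch of that technique, not a configuration from the thread; the plugin version and shaded package prefix are illustrative:

```xml
<!-- Shade and relocate Guava inside the application jar (illustrative) -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```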
>
> On Thursday, December
> Managing Java dependencies for Apache Spark applications on Cloud Datapr...
>
> Learn how to set up Java imported packages for Apache Spark on Cloud
> Dataproc to avoid conflicts.
>
> <https://cloud.google.com/blog/products/data-analytics/managing-java-dependencies-apache-spark-applications-cloud-dataproc>
>
not supported by the Spark version. What are the Spark and Guava
> versions? Use a more recent Guava version dependency in the Maven pom.xml.
>
> Regarding Docker, a cloud platform instance such as EC2 could be used with
> Hyper-V support.
>
> On Thursday, December 5, 2019, 10:51:59 P
at 2:38 PM Sean Owen wrote:
> No, the build works fine, at least certainly on test machines. As I
> say, try running from the actual Spark home, not bin/. You are still
> running spark-shell there.
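Sean's point about the working directory can be shown concretely; the install path here is made up, and this assumes the Windows launch scripts that ship under bin\:

```shell
# Launch spark-shell from the Spark root directory, not from inside bin\
cd C:\tools\spark   # hypothetical SPARK_HOME
bin\spark-shell
```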
>
> On Thu, Dec 5, 2019 at 4:37 PM Ping Liu wrote:
> >
> > Hi Sean,
> >
Are you sure it succeeded?
> Try running from the Spark home dir, not bin.
> I know we do run Windows tests and it appears to pass tests, etc.
>
> On Thu, Dec 5, 2019 at 3:28 PM Ping Liu wrote:
> >
> > Hello,
> >
> > I understand Spark is preferably built on Linux. But I have a Windows
> machine with a slow Virtual Bo
Hello,
I understand Spark is preferably built on Linux, but I have a Windows
machine with only a slow VirtualBox VM for Linux. So I wish I were able to
build and run Spark code in a Windows environment.
Unfortunately,
# Apache Hadoop 2.6.X
./build/mvn -Pyarn -DskipTests clean package
# Apache Hadoop
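The commands above follow the pattern in Spark's building guide, where a Hadoop profile and version are selected with -P and -Dhadoop.version. The exact profile names depend on the Spark branch being built, so treat the values below as examples rather than the thread's actual invocation:

```shell
# Apache Hadoop 2.7.X (profile/version values are examples)
./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.4 -DskipTests clean package
```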