libraryDependencies

2016-07-26 Thread Martin Somers
my build file looks like:

    libraryDependencies ++= Seq(
      // other dependencies here
      "org.apache.spark" %% "spark-core" % "1.6.2" % "provided",
      "org.apache.spark" %% "spark-mllib_2.11" % "1.6.0",
      "org.scalanlp" % "breeze_2.11" % "0.7",

Re: libraryDependencies

2016-07-26 Thread Martin Somers
...icks.com> wrote:

> Also, you'll want all of the various spark versions to be the same.
>
> On Tue, Jul 26, 2016 at 12:34 PM, Michael Armbrust <mich...@databricks.com> wrote:
>
>> If you are using %% (double) then you do not need _2.11.
>>
>> On Tue, Jul 26,
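Putting the two replies together, a corrected dependency list would drop the explicit _2.11 suffix wherever %% is used and keep every Spark artifact on the same version; a minimal sketch, assuming scalaVersion is set to 2.11.x and Spark 1.6.2 throughout:

    libraryDependencies ++= Seq(
      // %% appends the Scala binary version (_2.11) automatically,
      // so the artifact names must not carry it themselves
      "org.apache.spark" %% "spark-core"  % "1.6.2" % "provided",
      // keep all Spark modules on the same version
      "org.apache.spark" %% "spark-mllib" % "1.6.2",
      "org.scalanlp"     %% "breeze"      % "0.7"
    )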

SVD output within Spark

2016-07-21 Thread Martin Somers
just looking at a comparison between Matlab and Spark for svd with an input matrix N

this is matlab code - yes very small matrix

    N =
        2.5903   -0.0416    0.6023
       -0.1236    2.5596    0.7629
        0.0148   -0.0693    0.2490

    U =
       -0.3706   -0.9284    0.0273
       -0.9287    0.3708
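For the Spark side of the comparison, a minimal sketch of the equivalent computation with MLlib's distributed SVD, assuming Spark 1.6 and an existing SparkContext sc. Note that an SVD is only unique up to the sign of each singular vector, so matching columns of U and V between Matlab and Spark may differ by a factor of -1:

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    // the 3x3 input matrix N, one dense vector per row
    val rows = sc.parallelize(Seq(
      Vectors.dense( 2.5903, -0.0416, 0.6023),
      Vectors.dense(-0.1236,  2.5596, 0.7629),
      Vectors.dense( 0.0148, -0.0693, 0.2490)))

    val mat = new RowMatrix(rows)

    // full SVD: N = U * diag(s) * V^T, keeping U for the comparison
    val svd = mat.computeSVD(3, computeU = true)
    svd.U.rows.collect().foreach(println)  // left singular vectors
    println(svd.s)                         // singular values
    println(svd.V)                         // right singular vectors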

sbt build under scala

2016-07-26 Thread Martin Somers
Just wondering, what is the correct way of building a Spark job using Scala - are there any changes coming with Spark v2? I've been following this post: http://www.infoobjects.com/spark-submit-with-sbt/ Then again, I've been mainly using Docker locally - what is a decent container for submitting
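The post linked above builds the job with sbt and hands the resulting jar to spark-submit; a minimal sketch of that setup for Spark 2 (the project name, version, and main class are assumptions for illustration):

    // build.sbt - Spark is "provided" because spark-submit supplies it at runtime
    name := "my-spark-job"
    version := "1.0"
    scalaVersion := "2.11.8"   // Spark 2.0 is built against Scala 2.11

    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"

    // then:  sbt package
    // and:   spark-submit --class MyJob target/scala-2.11/my-spark-job_2.11-1.0.jar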

UNSUBSCRIBE

2016-08-10 Thread Martin Somers
-- M

Unsubscribe.

2016-08-09 Thread Martin Somers
Unsubscribe. Thanks M

DCOS - s3

2016-08-21 Thread Martin Somers
I'm having trouble loading data from an S3 repo. Currently DCOS is running Spark 2, so I'm not sure if there is a modification needed to the code with the upgrade. My code at the moment looks like this:

    sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "xxx")
    sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "xxx")
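One thing worth checking under Spark 2 is the s3a connector, which has largely replaced the older s3n scheme; a minimal sketch, assuming the hadoop-aws jar is on the classpath and a SparkSession named spark (the bucket and path are placeholders):

    // s3a equivalents of the s3n credential keys
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "xxx")
    spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "xxx")

    // read with the s3a:// scheme instead of s3n://
    val df = spark.read.text("s3a://my-bucket/path/to/data")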

GPU job in Spark 3

2021-04-09 Thread Martin Somers
Hi Everyone!! I'm trying to get an on-premise GPU instance of Spark 3 running on my Ubuntu box, and I am following: https://nvidia.github.io/spark-rapids/docs/get-started/getting-started-on-prem.html#example-join-operation Anyone with any insight into why a spark job isn't being run on the GPU -
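For anyone hitting the same wall, a minimal sketch of the checks that usually explain a CPU fallback, assuming the RAPIDS plugin jar path is in SPARK_RAPIDS_PLUGIN_JAR (the variable name is illustrative):

    // launch spark-shell with the plugin enabled, e.g.:
    //   spark-shell --jars ${SPARK_RAPIDS_PLUGIN_JAR} \
    //     --conf spark.plugins=com.nvidia.spark.SQLPlugin \
    //     --conf spark.rapids.sql.enabled=true

    // confirm the plugin settings actually reached the session
    println(spark.conf.get("spark.plugins"))
    println(spark.conf.get("spark.rapids.sql.enabled"))

    // run a small join and inspect the plan: GPU-executed operators
    // show up as Gpu* nodes (GpuShuffledHashJoin, GpuProject, ...);
    // plain CPU operator names mean the job fell back to the CPU
    val a = spark.range(0, 1000000).toDF("x")
    val b = spark.range(0, 1000000).toDF("x")
    a.join(b, "x").explain()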