Hi,
Unfortunately no. I just used this lib for raw FM and FFM. I thought it
could be a good baseline for your needs.
Regards
Maximilien
On 16/04/18 15:43, Sundeep Kumar Mehta wrote:
Hi Maximilien,
Thanks for your response. Did you convert this repo into a DStream for
continuous/incremental training?
Regards
Sundeep
On Mon, Apr 16, 2018 at 4:17 PM, Maximilien DEFOURNE <
maximilien.defou...@s4m.io> wrote:
Hi,
I used this repo for FM/FFM: https://github.com/Intel-bigdata/imllib-spark
Regards
Maximilien DEFOURNE
On 15/04/18 05:14, Sundeep Kumar Mehta wrote:
Hi All,
Is there any library or GitHub project for using factorization machines or
field-aware factorization machines via online learning for continuous
training? Please share your thoughts on this.
Regards
Sundeep
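For reference, the DStream-style incremental training asked about here is what Spark's streaming regression estimators already do for linear models. A minimal sketch of that pattern follows; the input path, batch interval, and feature count are placeholders, and this trains a plain linear model, not an FM/FFM:

import org.apache.spark.SparkConf
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingTrainingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("streaming-training-sketch")
    val ssc = new StreamingContext(conf, Seconds(60))   // placeholder batch interval

    // Each new file dropped into the directory becomes one micro-batch of
    // labelled points in LabeledPoint.parse format, e.g. "(1.0,[0.5,1.2])".
    val training = ssc.textFileStream("hdfs:///placeholder/training")
      .map(LabeledPoint.parse)

    // The model is updated on every batch of the DStream; this is the
    // "continuous/incremental training" loop, here for a plain linear model.
    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.zeros(2))              // placeholder feature count

    model.trainOn(training)

    ssc.start()
    ssc.awaitTermination()
  }
}

An FM/FFM version would need the same kind of trainOn loop with a factorization-machine update step; as the reply above notes, imllib-spark was only used for batch ("raw") FM/FFM.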
It's true that CrossValidator is not parallel currently - see
https://issues.apache.org/jira/browse/SPARK-19357 and feel free to help
review.
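For context, a minimal CrossValidator setup of the kind affected; the dataset and grid values are placeholders. At the time of this thread each fold-and-parameter fit ran one after another, which is what SPARK-19357 set out to change:

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val lr = new LogisticRegression()

val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5))
  .build()

// numFolds x paramGrid.size models get trained; as of this thread those
// fits run sequentially, which SPARK-19357 tracks.
val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)

// val cvModel = cv.fit(training)  // "training" is an assumed DataFrame with label/features columns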
On Fri, 7 Apr 2017 at 14:18 Aseem Bansal wrote:
- Limited the data to 100,000 records.
- 6 categorical features which go through imputation, string indexing, and
one-hot encoding. The maximum number of classes for a feature is 100. As the
data is imputed it becomes dense.
- 1 numerical feature.
- Training Logistic Regression through
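A minimal sketch of a pipeline along those lines, with hypothetical column names and a simple na.fill standing in for whatever imputation step was actually used:

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer, VectorAssembler}

// "raw" is an assumed DataFrame read from S3; "cat1" stands in for the six
// categorical columns and "num1" for the numerical one.
val imputed = raw.na.fill("missing", Seq("cat1")).na.fill(0.0, Seq("num1"))

val indexer = new StringIndexer()
  .setInputCol("cat1").setOutputCol("cat1_idx").setHandleInvalid("skip")
val encoder = new OneHotEncoder()
  .setInputCol("cat1_idx").setOutputCol("cat1_vec")
val assembler = new VectorAssembler()
  .setInputCols(Array("cat1_vec", "num1")).setOutputCol("features")
val lr = new LogisticRegression()
  .setLabelCol("label").setFeaturesCol("features")

val model = new Pipeline()
  .setStages(Array(indexer, encoder, assembler, lr))
  .fit(imputed)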
What is the size of the training data (number of examples, number of
features)? Dense or sparse features? How many classes?
What commands are you using to submit your job via spark-submit?
On Fri, 7 Apr 2017 at 13:12 Aseem Bansal wrote:
When using Spark ML's LogisticRegression, RandomForest, CrossValidator, etc.,
do we need to take any special care in our code to make it scale with more
CPUs, or does it scale automatically?
I am reading some data from S3 and using a pipeline to train a model. I am
running the job on a spark
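On the scaling question in general: the ML estimators parallelize over the partitions of the input DataFrame, so the main code-level knob is making sure the data is split across enough partitions for the cores requested. A minimal sketch with placeholder paths and settings:

import org.apache.spark.sql.SparkSession

// Placeholders throughout; the estimators themselves need no special code to
// use more CPUs, since they parallelize over the partitions of the input.
val spark = SparkSession.builder()
  .appName("lr-pipeline")
  .getOrCreate()

val raw = spark.read
  .option("header", "true")
  .csv("s3a://some-bucket/some/path/data.csv")            // hypothetical S3 path
  .repartition(spark.sparkContext.defaultParallelism)     // spread the work across all cores

// ... imputation, indexing, encoding and LogisticRegression.fit as usual ...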
How do I build the Spark SQL Avro library for Spark 1.2?
I was following https://github.com/databricks/spark-avro and was able to
build spark-avro_2.10-1.0.0.jar by simply running sbt/sbt package from the
project root.
But we are on Spark 1.2 and need a compatible spark-avro jar.
Any idea how
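One hedged way to attempt this, assuming spark-avro's sbt build exposes its Spark dependency in build.sbt: pin the spark-sql dependency to 1.2.0 and re-run sbt/sbt package from the project root. The settings below are illustrative only and may not match the repo's actual build definition:

// build.sbt (sketch, not the repo's real build file)
name := "spark-avro"

scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "1.2.0" % "provided",
  "org.apache.avro"  %  "avro"      % "1.7.6"
)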
Hello,
Suppose I want to use Spark from an application that I already submit to run
in another container (e.g. Tomcat). Is this at all possible? Or do I have to
split the app into two components, and submit one to Spark and one to the other
container? In that case, what is the
If you want to run the computation on just one machine (using Spark's local
mode), it can probably run in a container. Otherwise you can create a
SparkContext there and connect it to a cluster outside. Note that I haven't
tried this though, so the security policies of the container might be too
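A minimal sketch of the "create a SparkContext there and connect it to a cluster outside" route, with placeholder master URL, jar path, and input path:

import org.apache.spark.{SparkConf, SparkContext}

// Built inside the container-managed app (e.g. a servlet's init method)
// instead of going through the submit script; every value below is a placeholder.
val conf = new SparkConf()
  .setAppName("embedded-spark-app")
  .setMaster("spark://cluster-master:7077")        // or "local[*]" for local mode
  .setJars(Seq("/path/to/app-assembly.jar"))       // ship the app's classes to the executors

val sc = new SparkContext(conf)

val counts = sc.textFile("hdfs:///placeholder/input")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

println(counts.take(10).mkString("\n"))
sc.stop()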
It depends on what you want to do with Spark. The following has worked for
me.
Let the container handle the HTTP request and then talk to Spark using
another HTTP/REST interface. You can use the Spark Job Server for this.
Embedding Spark inside the container is not a great long term solution IMO
the script. Thanks!
Best, Oliver
From: Matei Zaharia [mailto:matei.zaha...@gmail.com]
Sent: Tuesday, September 16, 2014 1:31 PM
To: Ruebenacker, Oliver A; user@spark.apache.org
Subject: Re: Spark as a Library
If you want to run the computation on just one machine (using Spark's local
mode
Sent: 16/09/2014 21.18
To: Matei Zaharia <matei.zaha...@gmail.com>; user@spark.apache.org
Subject: RE: Spark as a Library
Hello,
Thanks for the response and great to hear it is possible. But how do I
connect to Spark without using the submit script?
I know
Hi,
I've successfully built 0.9.0-incubating on Solaris using sbt, following
the instructions at http://spark.incubator.apache.org/docs/latest/ and
it seems to work OK. However, when I start it up I get an error about
missing Hadoop native libraries. I can't find any mention of how to
build
Is it an error, or just a warning? In any case, you need to get those libraries
from a build of Hadoop for your platform. Then add them to the
SPARK_LIBRARY_PATH environment variable in conf/spark-env.sh, or to your
-Djava.library.path if launching an application separately.
These libraries
On 06/03/2014 18:55, Matei Zaharia wrote:
For the native libraries, you can use an existing Hadoop build and
just put them on the path. For linking to Hadoop, Spark grabs it
through Maven, but you can do mvn install locally on your version
of Hadoop to install it to your local Maven cache, and