Hi,
Unfortunately no. I just used this lib for raw FM and FFM. I thought it
could be a good baseline for your needs.
Regards
Maximilien
On 16/04/18 15:43, Sundeep Kumar Mehta wrote:
Hi Maximilien,
Thanks for your response. Did you convert this repo to use DStreams for
continuous/incremental training?
Regards
Sundeep
On Mon, Apr 16, 2018 at 4:17 PM, Maximilien DEFOURNE <
maximilien.defou...@s4m.io> wrote:
Hi,
I used this repo for FM/FFM : https://github.com/Intel-bigdata/imllib-spark
Regards
Maximilien DEFOURNE
On 15/04/18 05:14, Sundeep Kumar Mehta wrote:
Hi All,
Any library/GitHub project to use a factorization machine or field-aware
factorization machine via online learning for
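For background on what the repos discussed above implement: a factorization machine scores an example as a linear model plus factorized pairwise interactions. Below is a minimal pure-Python sketch of the FM prediction rule (not code from imllib-spark; the weights, factors, and latent dimension are made-up illustrations), using the standard O(k·n) rewrite of the pairwise term:

```python
# Factorization machine (FM) prediction:
#   y(x) = w0 + sum_i w[i]*x[i] + sum_{i<j} <v[i], v[j]> * x[i]*x[j]
# The pairwise term is computed in O(k*n) via the identity
#   sum_{i<j} <v[i],v[j]> x_i x_j
#     = 0.5 * sum_f [ (sum_i v[i][f] x_i)^2 - sum_i (v[i][f] x_i)^2 ]

def fm_predict(w0, w, v, x):
    """w0: bias, w: n linear weights, v: n-by-k latent factors, x: n features."""
    n, k = len(x), len(v[0])
    linear = sum(w[i] * x[i] for i in range(n))
    pairwise = 0.0
    for f in range(k):
        s = sum(v[i][f] * x[i] for i in range(n))
        s2 = sum((v[i][f] * x[i]) ** 2 for i in range(n))
        pairwise += 0.5 * (s * s - s2)
    return w0 + linear + pairwise
```

Online/incremental training, as asked about above, would update `w0`, `w`, and `v` per example with SGD on the loss gradient; a DStream-based variant would apply such updates per micro-batch.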
It's true that CrossValidator is not parallel currently - see
https://issues.apache.org/jira/browse/SPARK-19357 and feel free to help
review.
On Fri, 7 Apr 2017 at 14:18 Aseem Bansal wrote:
- Limited the data to 100,000 records.
- 6 categorical features which go through imputation, string indexing, and
one-hot encoding. The maximum number of classes for a feature is 100. As the
data is imputed it becomes dense.
- 1 numerical feature.
- Training Logistic Regression through
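A rough back-of-the-envelope check of why densification hurts with these numbers (a sketch; 8 bytes per double and exactly one active category per feature are assumptions):

```python
# 6 categorical features with up to 100 categories each one-hot encode to
# roughly 600 columns, plus 1 numerical feature.
records = 100_000
dims = 6 * 100 + 1          # ~601 columns after one-hot encoding
bytes_per_double = 8

# Dense: every column stored for every row.
dense_bytes = records * dims * bytes_per_double

# Sparse: only the ~7 non-zero values per row (one per original feature),
# ignoring index overhead for simplicity.
nnz_per_row = 7
sparse_bytes = records * nnz_per_row * bytes_per_double

print(f"dense ~ {dense_bytes / 2**20:.0f} MiB, sparse ~ {sparse_bytes / 2**20:.1f} MiB")
```

Keeping the one-hot vectors sparse keeps the working set nearly two orders of magnitude smaller; an imputation step that densifies every row forfeits that.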
What is the size of the training data (number of examples, number of
features)? Dense or sparse features? How many classes?
What commands are you using to submit your job via spark-submit?
On Fri, 7 Apr 2017 at 13:12 Aseem Bansal wrote:
> When using spark ml's LogisticRegression,
If you want to run the computation on just one machine (using Spark's local
mode), it can probably run in a container. Otherwise you can create a
SparkContext there and connect it to a cluster outside. Note that I haven't
tried this though, so the security policies of the container might be too
It depends on what you want to do with Spark. The following has worked for
me.
Let the container handle the HTTP request and then talk to Spark using
another HTTP/REST interface. You can use the Spark Job Server for this.
Embedding Spark inside the container is not a great long-term solution IMO
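The pattern described above (the web container handles the HTTP request, and job submission goes to a separate REST job service) can be sketched as follows. The host name, port, and query parameters are illustrative placeholders, not the exact Spark Job Server API:

```python
# Sketch: the container does not embed Spark; it forwards work to a
# standalone REST job service (e.g. Spark Job Server). Endpoint shape below
# is a hypothetical illustration.
import json
import urllib.request

def build_job_request(base_url, app_name, class_path, payload):
    """Build (but do not send) an HTTP POST submitting a job."""
    url = f"{base_url}/jobs?appName={app_name}&classPath={class_path}"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=data,
        headers={"Content-Type": "application/json"},
        method="POST")

req = build_job_request("http://jobserver:8090", "myapp",
                        "com.example.MyJob", {"input": "hdfs:///data"})
# urllib.request.urlopen(req) would actually submit it; omitted here.
```

This keeps the servlet container's lifecycle and classpath separate from Spark's, which is the main point of the advice above.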
the script. Thanks!
Best, Oliver
From: Matei Zaharia [mailto:matei.zaha...@gmail.com]
Sent: Tuesday, September 16, 2014 1:31 PM
To: Ruebenacker, Oliver A; user@spark.apache.org
Subject: Re: Spark as a Library
If you want to run the computation on just one machine (using Spark's local
mode